2025-06-02 19:15:29.892311 | Job console starting
2025-06-02 19:15:29.906694 | Updating git repos
2025-06-02 19:15:29.988246 | Cloning repos into workspace
2025-06-02 19:15:30.170220 | Restoring repo states
2025-06-02 19:15:30.185793 | Merging changes
2025-06-02 19:15:30.185821 | Checking out repos
2025-06-02 19:15:30.480505 | Preparing playbooks
2025-06-02 19:15:31.141988 | Running Ansible setup
2025-06-02 19:15:35.433009 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-02 19:15:36.187735 |
2025-06-02 19:15:36.187913 | PLAY [Base pre]
2025-06-02 19:15:36.205166 |
2025-06-02 19:15:36.205322 | TASK [Setup log path fact]
2025-06-02 19:15:36.236901 | orchestrator | ok
2025-06-02 19:15:36.254585 |
2025-06-02 19:15:36.254728 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 19:15:36.296625 | orchestrator | ok
2025-06-02 19:15:36.309086 |
2025-06-02 19:15:36.309236 | TASK [emit-job-header : Print job information]
2025-06-02 19:15:36.371064 | # Job Information
2025-06-02 19:15:36.371471 | Ansible Version: 2.16.14
2025-06-02 19:15:36.371560 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-06-02 19:15:36.371645 | Pipeline: post
2025-06-02 19:15:36.371703 | Executor: 521e9411259a
2025-06-02 19:15:36.371755 | Triggered by: https://github.com/osism/testbed/commit/ac006a7fea378f1b38fc889be9ab54b480327f41
2025-06-02 19:15:36.371810 | Event ID: eee0cf1c-3fe5-11f0-8b84-2af3573f70ae
2025-06-02 19:15:36.390271 |
2025-06-02 19:15:36.390501 | LOOP [emit-job-header : Print node information]
2025-06-02 19:15:36.531613 | orchestrator | ok:
2025-06-02 19:15:36.532005 | orchestrator | # Node Information
2025-06-02 19:15:36.532081 | orchestrator | Inventory Hostname: orchestrator
2025-06-02 19:15:36.532128 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-02 19:15:36.532215 | orchestrator | Username: zuul-testbed03
2025-06-02 19:15:36.532253 | orchestrator | Distro: Debian 12.11
2025-06-02 19:15:36.532295 | orchestrator | Provider: static-testbed
2025-06-02 19:15:36.532333 | orchestrator | Region:
2025-06-02 19:15:36.532371 | orchestrator | Label: testbed-orchestrator
2025-06-02 19:15:36.532406 | orchestrator | Product Name: OpenStack Nova
2025-06-02 19:15:36.532438 | orchestrator | Interface IP: 81.163.193.140
2025-06-02 19:15:36.561758 |
2025-06-02 19:15:36.561896 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-02 19:15:37.072339 | orchestrator -> localhost | changed
2025-06-02 19:15:37.080535 |
2025-06-02 19:15:37.080668 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-02 19:15:38.153675 | orchestrator -> localhost | changed
2025-06-02 19:15:38.169339 |
2025-06-02 19:15:38.169490 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-02 19:15:38.467319 | orchestrator -> localhost | ok
2025-06-02 19:15:38.474547 |
2025-06-02 19:15:38.474671 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-02 19:15:38.508325 | orchestrator | ok
2025-06-02 19:15:38.524919 | orchestrator | included: /var/lib/zuul/builds/967e2dde244849e8aeeebb16e5f5ee2e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-02 19:15:38.533164 |
2025-06-02 19:15:38.533284 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-02 19:15:40.226528 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-02 19:15:40.226937 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/967e2dde244849e8aeeebb16e5f5ee2e/work/967e2dde244849e8aeeebb16e5f5ee2e_id_rsa
2025-06-02 19:15:40.227020 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/967e2dde244849e8aeeebb16e5f5ee2e/work/967e2dde244849e8aeeebb16e5f5ee2e_id_rsa.pub
2025-06-02 19:15:40.227076 | orchestrator -> localhost | The key fingerprint is:
2025-06-02 19:15:40.227125 | orchestrator -> localhost | SHA256:EZY1c9lJ+8rsxxbCB23mLmaQQ73L0iCiTmyfmxKfI50 zuul-build-sshkey
2025-06-02 19:15:40.227230 | orchestrator -> localhost | The key's randomart image is:
2025-06-02 19:15:40.227295 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-02 19:15:40.227343 | orchestrator -> localhost | | oo+ .+..        |
2025-06-02 19:15:40.227387 | orchestrator -> localhost | | ... +. o.       |
2025-06-02 19:15:40.227430 | orchestrator -> localhost | | . . o           |
2025-06-02 19:15:40.227470 | orchestrator -> localhost | | . . o =         |
2025-06-02 19:15:40.227510 | orchestrator -> localhost | |    S . o * .    |
2025-06-02 19:15:40.227561 | orchestrator -> localhost | | .. . . = * =    |
2025-06-02 19:15:40.227605 | orchestrator -> localhost | |  += + . * O..   |
2025-06-02 19:15:40.227648 | orchestrator -> localhost | |  o+.Eo . O .+   |
2025-06-02 19:15:40.227691 | orchestrator -> localhost | | ..o=o + oo      |
2025-06-02 19:15:40.227733 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-02 19:15:40.227836 | orchestrator -> localhost | ok: Runtime: 0:00:01.192175
2025-06-02 19:15:40.245432 |
2025-06-02 19:15:40.245629 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-02 19:15:40.275662 | orchestrator | ok
2025-06-02 19:15:40.286124 | orchestrator | included: /var/lib/zuul/builds/967e2dde244849e8aeeebb16e5f5ee2e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-02 19:15:40.295332 |
2025-06-02 19:15:40.295438 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-02 19:15:40.318926 | orchestrator | skipping: Conditional result was False
2025-06-02 19:15:40.326880 |
2025-06-02 19:15:40.326984 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-02 19:15:41.026444 | orchestrator | changed
2025-06-02 19:15:41.032841 |
2025-06-02 19:15:41.032952 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-02 19:15:41.310307 | orchestrator | ok
2025-06-02 19:15:41.319451 |
2025-06-02 19:15:41.319587 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-02 19:15:41.961437 | orchestrator | ok
2025-06-02 19:15:41.969590 |
2025-06-02 19:15:41.969728 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-02 19:15:42.383123 | orchestrator | ok
2025-06-02 19:15:42.393002 |
2025-06-02 19:15:42.393157 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-02 19:15:42.417707 | orchestrator | skipping: Conditional result was False
2025-06-02 19:15:42.433038 |
2025-06-02 19:15:42.433206 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-02 19:15:42.886590 | orchestrator -> localhost | changed
2025-06-02 19:15:42.900853 |
2025-06-02 19:15:42.900981 | TASK [add-build-sshkey : Add back temp key]
2025-06-02 19:15:43.218817 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/967e2dde244849e8aeeebb16e5f5ee2e/work/967e2dde244849e8aeeebb16e5f5ee2e_id_rsa (zuul-build-sshkey)
2025-06-02 19:15:43.219092 | orchestrator -> localhost | ok: Runtime: 0:00:00.009678
2025-06-02 19:15:43.226423 |
2025-06-02 19:15:43.226527 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-02 19:15:43.639727 | orchestrator | ok
2025-06-02 19:15:43.649314 |
2025-06-02 19:15:43.649452 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-02 19:15:43.683793 | orchestrator | skipping: Conditional result was False
2025-06-02 19:15:43.747091 |
2025-06-02 19:15:43.747257 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-02 19:15:44.171706 | orchestrator | ok
2025-06-02 19:15:44.188088 |
2025-06-02 19:15:44.188291 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-02 19:15:44.229447 | orchestrator | ok
2025-06-02 19:15:44.237337 |
2025-06-02 19:15:44.237456 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-02 19:15:44.555089 | orchestrator -> localhost | ok
2025-06-02 19:15:44.563347 |
2025-06-02 19:15:44.563469 | TASK [validate-host : Collect information about the host]
2025-06-02 19:15:45.764343 | orchestrator | ok
2025-06-02 19:15:45.784585 |
2025-06-02 19:15:45.784762 | TASK [validate-host : Sanitize hostname]
2025-06-02 19:15:45.844693 | orchestrator | ok
2025-06-02 19:15:45.853870 |
2025-06-02 19:15:45.854603 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-02 19:15:46.426311 | orchestrator -> localhost | changed
2025-06-02 19:15:46.433556 |
2025-06-02 19:15:46.433693 | TASK [validate-host : Collect information about zuul worker]
2025-06-02 19:15:46.871276 | orchestrator | ok
2025-06-02 19:15:46.878928 |
2025-06-02 19:15:46.879077 | TASK [validate-host : Write out all zuul information for each host]
2025-06-02 19:15:47.461362 | orchestrator -> localhost | changed
2025-06-02 19:15:47.480359 |
2025-06-02 19:15:47.480514 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-02 19:15:47.765342 | orchestrator | ok
2025-06-02 19:15:47.771754 |
2025-06-02 19:15:47.771862 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-02 19:16:27.510448 | orchestrator | changed:
2025-06-02 19:16:27.510760 | orchestrator | .d..t...... src/
2025-06-02 19:16:27.510817 | orchestrator | .d..t...... src/github.com/
2025-06-02 19:16:27.510885 | orchestrator | .d..t...... src/github.com/osism/
2025-06-02 19:16:27.510924 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-02 19:16:27.510961 | orchestrator | RedHat.yml
2025-06-02 19:16:27.524417 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-02 19:16:27.524434 | orchestrator | RedHat.yml
2025-06-02 19:16:27.524486 | orchestrator | = 2.2.0"...
2025-06-02 19:16:41.666314 | orchestrator | 19:16:41.666 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-02 19:16:41.744217 | orchestrator | 19:16:41.744 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-06-02 19:16:42.776306 | orchestrator | 19:16:42.776 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-02 19:16:43.620587 | orchestrator | 19:16:43.620 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 19:16:44.620723 | orchestrator | 19:16:44.620 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-02 19:16:45.631107 | orchestrator | 19:16:45.630 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-02 19:16:46.668962 | orchestrator | 19:16:46.668 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-02 19:16:47.524700 | orchestrator | 19:16:47.524 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 19:16:47.524972 | orchestrator | 19:16:47.524 STDOUT terraform: Providers are signed by their developers.
2025-06-02 19:16:47.524986 | orchestrator | 19:16:47.524 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-02 19:16:47.524991 | orchestrator | 19:16:47.524 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-02 19:16:47.525225 | orchestrator | 19:16:47.525 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-02 19:16:47.525238 | orchestrator | 19:16:47.525 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-02 19:16:47.525245 | orchestrator | 19:16:47.525 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-02 19:16:47.525250 | orchestrator | 19:16:47.525 STDOUT terraform: you run "tofu init" in the future.
2025-06-02 19:16:47.525793 | orchestrator | 19:16:47.525 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-02 19:16:47.526163 | orchestrator | 19:16:47.525 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-02 19:16:47.526175 | orchestrator | 19:16:47.525 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-02 19:16:47.526179 | orchestrator | 19:16:47.525 STDOUT terraform: should now work.
2025-06-02 19:16:47.526184 | orchestrator | 19:16:47.525 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-02 19:16:47.526188 | orchestrator | 19:16:47.526 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-02 19:16:47.526193 | orchestrator | 19:16:47.526 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-02 19:16:47.702192 | orchestrator | 19:16:47.701 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-02 19:16:47.907252 | orchestrator | 19:16:47.907 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-02 19:16:47.908020 | orchestrator | 19:16:47.907 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-02 19:16:47.908045 | orchestrator | 19:16:47.907 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-02 19:16:47.908059 | orchestrator | 19:16:47.907 STDOUT terraform: for this configuration.
2025-06-02 19:16:48.145848 | orchestrator | 19:16:48.145 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-02 19:16:48.240091 | orchestrator | 19:16:48.239 STDOUT terraform: ci.auto.tfvars
2025-06-02 19:16:48.245709 | orchestrator | 19:16:48.245 STDOUT terraform: default_custom.tf
2025-06-02 19:16:48.457847 | orchestrator | 19:16:48.457 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-02 19:16:49.398340 | orchestrator | 19:16:49.398 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
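The repeated Terragrunt deprecation warnings above indicate the job environment still sets `TERRAGRUNT_TFPATH`. A minimal sketch of the migration the warning asks for, assuming the binary path shown in the log (the exact place the job exports this variable is not visible here):

```shell
# Sketch only: replace the deprecated Terragrunt variable with its successor.
# The path /home/zuul-testbed03/terraform is taken from the warning text above.
unset TERRAGRUNT_TFPATH                          # stop setting the deprecated name
export TG_TF_PATH=/home/zuul-testbed03/terraform # new name recommended by Terragrunt
```

Once the job definition exports `TG_TF_PATH` instead, the three `WARN` lines in this run should disappear.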
2025-06-02 19:16:49.945454 | orchestrator | 19:16:49.945 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-02 19:16:50.153110 | orchestrator | 19:16:50.152 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-02 19:16:50.153217 | orchestrator | 19:16:50.152 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-02 19:16:50.153225 | orchestrator | 19:16:50.153 STDOUT terraform:   + create
2025-06-02 19:16:50.153232 | orchestrator | 19:16:50.153 STDOUT terraform:  <= read (data resources)
2025-06-02 19:16:50.153241 | orchestrator | 19:16:50.153 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-02 19:16:50.153429 | orchestrator | 19:16:50.153 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-06-02 19:16:50.153536 | orchestrator | 19:16:50.153 STDOUT terraform:   # (config refers to values not yet known)
2025-06-02 19:16:50.153607 | orchestrator | 19:16:50.153 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-02 19:16:50.153671 | orchestrator | 19:16:50.153 STDOUT terraform:       + checksum = (known after apply)
2025-06-02 19:16:50.153756 | orchestrator | 19:16:50.153 STDOUT terraform:       + created_at = (known after apply)
2025-06-02 19:16:50.153875 | orchestrator | 19:16:50.153 STDOUT terraform:       + file = (known after apply)
2025-06-02 19:16:50.153965 | orchestrator | 19:16:50.153 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.154080 | orchestrator | 19:16:50.153 STDOUT terraform:       + metadata = (known after apply)
2025-06-02 19:16:50.154151 | orchestrator | 19:16:50.154 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-02 19:16:50.154287 | orchestrator | 19:16:50.154 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-06-02 19:16:50.154395 | orchestrator | 19:16:50.154 STDOUT terraform:       + most_recent = true
2025-06-02 19:16:50.154528 | orchestrator | 19:16:50.154 STDOUT terraform:       + name = (known after apply)
2025-06-02 19:16:50.154712 | orchestrator | 19:16:50.154 STDOUT terraform:       + protected = (known after apply)
2025-06-02 19:16:50.154849 | orchestrator | 19:16:50.154 STDOUT terraform:       + region = (known after apply)
2025-06-02 19:16:50.154977 | orchestrator | 19:16:50.154 STDOUT terraform:       + schema = (known after apply)
2025-06-02 19:16:50.155129 | orchestrator | 19:16:50.154 STDOUT terraform:       + size_bytes = (known after apply)
2025-06-02 19:16:50.155245 | orchestrator | 19:16:50.155 STDOUT terraform:       + tags = (known after apply)
2025-06-02 19:16:50.155390 | orchestrator | 19:16:50.155 STDOUT terraform:       + updated_at = (known after apply)
2025-06-02 19:16:50.155442 | orchestrator | 19:16:50.155 STDOUT terraform:     }
2025-06-02 19:16:50.155654 | orchestrator | 19:16:50.155 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-06-02 19:16:50.155853 | orchestrator | 19:16:50.155 STDOUT terraform:   # (config refers to values not yet known)
2025-06-02 19:16:50.156004 | orchestrator | 19:16:50.155 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-02 19:16:50.156159 | orchestrator | 19:16:50.156 STDOUT terraform:       + checksum = (known after apply)
2025-06-02 19:16:50.156287 | orchestrator | 19:16:50.156 STDOUT terraform:       + created_at = (known after apply)
2025-06-02 19:16:50.156430 | orchestrator | 19:16:50.156 STDOUT terraform:       + file = (known after apply)
2025-06-02 19:16:50.156558 | orchestrator | 19:16:50.156 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.156681 | orchestrator | 19:16:50.156 STDOUT terraform:       + metadata = (known after apply)
2025-06-02 19:16:50.156844 | orchestrator | 19:16:50.156 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-02 19:16:50.156976 | orchestrator | 19:16:50.156 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-06-02 19:16:50.157051 | orchestrator | 19:16:50.156 STDOUT terraform:       + most_recent = true
2025-06-02 19:16:50.157182 | orchestrator | 19:16:50.157 STDOUT terraform:       + name = (known after apply)
2025-06-02 19:16:50.157297 | orchestrator | 19:16:50.157 STDOUT terraform:       + protected = (known after apply)
2025-06-02 19:16:50.157422 | orchestrator | 19:16:50.157 STDOUT terraform:       + region = (known after apply)
2025-06-02 19:16:50.157583 | orchestrator | 19:16:50.157 STDOUT terraform:       + schema = (known after apply)
2025-06-02 19:16:50.157716 | orchestrator | 19:16:50.157 STDOUT terraform:       + size_bytes = (known after apply)
2025-06-02 19:16:50.157874 | orchestrator | 19:16:50.157 STDOUT terraform:       + tags = (known after apply)
2025-06-02 19:16:50.158001 | orchestrator | 19:16:50.157 STDOUT terraform:       + updated_at = (known after apply)
2025-06-02 19:16:50.158097 | orchestrator | 19:16:50.158 STDOUT terraform:     }
2025-06-02 19:16:50.158236 | orchestrator | 19:16:50.158 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-06-02 19:16:50.158346 | orchestrator | 19:16:50.158 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-06-02 19:16:50.158492 | orchestrator | 19:16:50.158 STDOUT terraform:       + content = (known after apply)
2025-06-02 19:16:50.158629 | orchestrator | 19:16:50.158 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-02 19:16:50.158812 | orchestrator | 19:16:50.158 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-02 19:16:50.158968 | orchestrator | 19:16:50.158 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-02 19:16:50.159100 | orchestrator | 19:16:50.158 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-02 19:16:50.159220 | orchestrator | 19:16:50.159 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-02 19:16:50.159320 | orchestrator | 19:16:50.159 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-02 19:16:50.159384 | orchestrator | 19:16:50.159 STDOUT terraform:       + directory_permission = "0777"
2025-06-02 19:16:50.159450 | orchestrator | 19:16:50.159 STDOUT terraform:       + file_permission = "0644"
2025-06-02 19:16:50.159593 | orchestrator | 19:16:50.159 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-06-02 19:16:50.159682 | orchestrator | 19:16:50.159 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.159717 | orchestrator | 19:16:50.159 STDOUT terraform:     }
2025-06-02 19:16:50.159791 | orchestrator | 19:16:50.159 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-06-02 19:16:50.159874 | orchestrator | 19:16:50.159 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-06-02 19:16:50.159993 | orchestrator | 19:16:50.159 STDOUT terraform:       + content = (known after apply)
2025-06-02 19:16:50.160089 | orchestrator | 19:16:50.159 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-02 19:16:50.160158 | orchestrator | 19:16:50.160 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-02 19:16:50.160219 | orchestrator | 19:16:50.160 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-02 19:16:50.160282 | orchestrator | 19:16:50.160 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-02 19:16:50.160351 | orchestrator | 19:16:50.160 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-02 19:16:50.160413 | orchestrator | 19:16:50.160 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-02 19:16:50.160455 | orchestrator | 19:16:50.160 STDOUT terraform:       + directory_permission = "0777"
2025-06-02 19:16:50.160507 | orchestrator | 19:16:50.160 STDOUT terraform:       + file_permission = "0644"
2025-06-02 19:16:50.160577 | orchestrator | 19:16:50.160 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-06-02 19:16:50.160653 | orchestrator | 19:16:50.160 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.160693 | orchestrator | 19:16:50.160 STDOUT terraform:     }
2025-06-02 19:16:50.160763 | orchestrator | 19:16:50.160 STDOUT terraform:   # local_file.inventory will be created
2025-06-02 19:16:50.160807 | orchestrator | 19:16:50.160 STDOUT terraform:   + resource "local_file" "inventory" {
2025-06-02 19:16:50.160881 | orchestrator | 19:16:50.160 STDOUT terraform:       + content = (known after apply)
2025-06-02 19:16:50.160997 | orchestrator | 19:16:50.160 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-02 19:16:50.161060 | orchestrator | 19:16:50.160 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-02 19:16:50.161123 | orchestrator | 19:16:50.161 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-02 19:16:50.161221 | orchestrator | 19:16:50.161 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-02 19:16:50.161275 | orchestrator | 19:16:50.161 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-02 19:16:50.161337 | orchestrator | 19:16:50.161 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-02 19:16:50.161392 | orchestrator | 19:16:50.161 STDOUT terraform:       + directory_permission = "0777"
2025-06-02 19:16:50.161436 | orchestrator | 19:16:50.161 STDOUT terraform:       + file_permission = "0644"
2025-06-02 19:16:50.161489 | orchestrator | 19:16:50.161 STDOUT terraform:       + filename = "inventory.ci"
2025-06-02 19:16:50.161562 | orchestrator | 19:16:50.161 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.161587 | orchestrator | 19:16:50.161 STDOUT terraform:     }
2025-06-02 19:16:50.161647 | orchestrator | 19:16:50.161 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-06-02 19:16:50.161699 | orchestrator | 19:16:50.161 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-06-02 19:16:50.161788 | orchestrator | 19:16:50.161 STDOUT terraform:       + content = (sensitive value)
2025-06-02 19:16:50.161850 | orchestrator | 19:16:50.161 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-02 19:16:50.161911 | orchestrator | 19:16:50.161 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-02 19:16:50.161973 | orchestrator | 19:16:50.161 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-02 19:16:50.162081 | orchestrator | 19:16:50.161 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-02 19:16:50.162159 | orchestrator | 19:16:50.162 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-02 19:16:50.162220 | orchestrator | 19:16:50.162 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-02 19:16:50.162261 | orchestrator | 19:16:50.162 STDOUT terraform:       + directory_permission = "0700"
2025-06-02 19:16:50.162304 | orchestrator | 19:16:50.162 STDOUT terraform:       + file_permission = "0600"
2025-06-02 19:16:50.162357 | orchestrator | 19:16:50.162 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-06-02 19:16:50.163301 | orchestrator | 19:16:50.162 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.163337 | orchestrator | 19:16:50.163 STDOUT terraform:     }
2025-06-02 19:16:50.163390 | orchestrator | 19:16:50.163 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-06-02 19:16:50.163443 | orchestrator | 19:16:50.163 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-06-02 19:16:50.163486 | orchestrator | 19:16:50.163 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.163512 | orchestrator | 19:16:50.163 STDOUT terraform:     }
2025-06-02 19:16:50.163598 | orchestrator | 19:16:50.163 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-02 19:16:50.163681 | orchestrator | 19:16:50.163 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-02 19:16:50.163784 | orchestrator | 19:16:50.163 STDOUT terraform:       + attachment = (known after apply)
2025-06-02 19:16:50.163831 | orchestrator | 19:16:50.163 STDOUT terraform:       + availability_zone = "nova"
2025-06-02 19:16:50.163895 | orchestrator | 19:16:50.163 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.163953 | orchestrator | 19:16:50.163 STDOUT terraform:       + image_id = (known after apply)
2025-06-02 19:16:50.164013 | orchestrator | 19:16:50.163 STDOUT terraform:       + metadata = (known after apply)
2025-06-02 19:16:50.164087 | orchestrator | 19:16:50.164 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-06-02 19:16:50.164150 | orchestrator | 19:16:50.164 STDOUT terraform:       + region = (known after apply)
2025-06-02 19:16:50.164177 | orchestrator | 19:16:50.164 STDOUT terraform:       + size = 80
2025-06-02 19:16:50.164217 | orchestrator | 19:16:50.164 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-02 19:16:50.164275 | orchestrator | 19:16:50.164 STDOUT terraform:       + volume_type = "ssd"
2025-06-02 19:16:50.164303 | orchestrator | 19:16:50.164 STDOUT terraform:     }
2025-06-02 19:16:50.164378 | orchestrator | 19:16:50.164 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-02 19:16:50.164452 | orchestrator | 19:16:50.164 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:50.164511 | orchestrator | 19:16:50.164 STDOUT terraform:       + attachment = (known after apply)
2025-06-02 19:16:50.164549 | orchestrator | 19:16:50.164 STDOUT terraform:       + availability_zone = "nova"
2025-06-02 19:16:50.164607 | orchestrator | 19:16:50.164 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.164664 | orchestrator | 19:16:50.164 STDOUT terraform:       + image_id = (known after apply)
2025-06-02 19:16:50.164724 | orchestrator | 19:16:50.164 STDOUT terraform:       + metadata = (known after apply)
2025-06-02 19:16:50.164809 | orchestrator | 19:16:50.164 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-06-02 19:16:50.164866 | orchestrator | 19:16:50.164 STDOUT terraform:       + region = (known after apply)
2025-06-02 19:16:50.164899 | orchestrator | 19:16:50.164 STDOUT terraform:       + size = 80
2025-06-02 19:16:50.164941 | orchestrator | 19:16:50.164 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-02 19:16:50.164980 | orchestrator | 19:16:50.164 STDOUT terraform:       + volume_type = "ssd"
2025-06-02 19:16:50.165003 | orchestrator | 19:16:50.164 STDOUT terraform:     }
2025-06-02 19:16:50.165080 | orchestrator | 19:16:50.164 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-02 19:16:50.165152 | orchestrator | 19:16:50.165 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:50.165215 | orchestrator | 19:16:50.165 STDOUT terraform:       + attachment = (known after apply)
2025-06-02 19:16:50.165251 | orchestrator | 19:16:50.165 STDOUT terraform:       + availability_zone = "nova"
2025-06-02 19:16:50.165310 | orchestrator | 19:16:50.165 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.165365 | orchestrator | 19:16:50.165 STDOUT terraform:       + image_id = (known after apply)
2025-06-02 19:16:50.165424 | orchestrator | 19:16:50.165 STDOUT terraform:       + metadata = (known after apply)
2025-06-02 19:16:50.165495 | orchestrator | 19:16:50.165 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-06-02 19:16:50.165553 | orchestrator | 19:16:50.165 STDOUT terraform:       + region = (known after apply)
2025-06-02 19:16:50.165585 | orchestrator | 19:16:50.165 STDOUT terraform:       + size = 80
2025-06-02 19:16:50.165624 | orchestrator | 19:16:50.165 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-02 19:16:50.165665 | orchestrator | 19:16:50.165 STDOUT terraform:       + volume_type = "ssd"
2025-06-02 19:16:50.165674 | orchestrator | 19:16:50.165 STDOUT terraform:     }
2025-06-02 19:16:50.165831 | orchestrator | 19:16:50.165 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-02 19:16:50.165904 | orchestrator | 19:16:50.165 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:50.165963 | orchestrator | 19:16:50.165 STDOUT terraform:       + attachment = (known after apply)
2025-06-02 19:16:50.165999 | orchestrator | 19:16:50.165 STDOUT terraform:       + availability_zone = "nova"
2025-06-02 19:16:50.166096 | orchestrator | 19:16:50.165 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.166152 | orchestrator | 19:16:50.166 STDOUT terraform:       + image_id = (known after apply)
2025-06-02 19:16:50.166210 | orchestrator | 19:16:50.166 STDOUT terraform:       + metadata = (known after apply)
2025-06-02 19:16:50.166289 | orchestrator | 19:16:50.166 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-06-02 19:16:50.166341 | orchestrator | 19:16:50.166 STDOUT terraform:       + region = (known after apply)
2025-06-02 19:16:50.166375 | orchestrator | 19:16:50.166 STDOUT terraform:       + size = 80
2025-06-02 19:16:50.166417 | orchestrator | 19:16:50.166 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-02 19:16:50.166456 | orchestrator | 19:16:50.166 STDOUT terraform:       + volume_type = "ssd"
2025-06-02 19:16:50.166479 | orchestrator | 19:16:50.166 STDOUT terraform:     }
2025-06-02 19:16:50.166552 | orchestrator | 19:16:50.166 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-02 19:16:50.166627 | orchestrator | 19:16:50.166 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:50.166695 | orchestrator | 19:16:50.166 STDOUT terraform:       + attachment = (known after apply)
2025-06-02 19:16:50.166778 | orchestrator | 19:16:50.166 STDOUT terraform:       + availability_zone = "nova"
2025-06-02 19:16:50.166826 | orchestrator | 19:16:50.166 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.166882 | orchestrator | 19:16:50.166 STDOUT terraform:       + image_id = (known after apply)
2025-06-02 19:16:50.166940 | orchestrator | 19:16:50.166 STDOUT terraform:       + metadata = (known after apply)
2025-06-02 19:16:50.167012 | orchestrator | 19:16:50.166 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-06-02 19:16:50.167071 | orchestrator | 19:16:50.167 STDOUT terraform:       + region = (known after apply)
2025-06-02 19:16:50.167105 | orchestrator | 19:16:50.167 STDOUT terraform:       + size = 80
2025-06-02 19:16:50.167144 | orchestrator | 19:16:50.167 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-02 19:16:50.167183 | orchestrator | 19:16:50.167 STDOUT terraform:       + volume_type = "ssd"
2025-06-02 19:16:50.167206 | orchestrator | 19:16:50.167 STDOUT terraform:     }
2025-06-02 19:16:50.167279 | orchestrator | 19:16:50.167 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-02 19:16:50.167348 | orchestrator | 19:16:50.167 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:50.167397 | orchestrator | 19:16:50.167 STDOUT terraform:       + attachment = (known after apply)
2025-06-02 19:16:50.167430 | orchestrator | 19:16:50.167 STDOUT terraform:       + availability_zone = "nova"
2025-06-02 19:16:50.167483 | orchestrator | 19:16:50.167 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.167534 | orchestrator | 19:16:50.167 STDOUT terraform:       + image_id = (known after apply)
2025-06-02 19:16:50.167586 | orchestrator | 19:16:50.167 STDOUT terraform:       + metadata = (known after apply)
2025-06-02 19:16:50.167649 | orchestrator | 19:16:50.167 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-06-02 19:16:50.167701 | orchestrator | 19:16:50.167 STDOUT terraform:       + region = (known after apply)
2025-06-02 19:16:50.167731 | orchestrator | 19:16:50.167 STDOUT terraform:       + size = 80
2025-06-02 19:16:50.167832 | orchestrator | 19:16:50.167 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-02 19:16:50.167841 | orchestrator | 19:16:50.167 STDOUT terraform:       + volume_type = "ssd"
2025-06-02 19:16:50.167846 | orchestrator | 19:16:50.167 STDOUT terraform:     }
2025-06-02 19:16:50.167920 | orchestrator | 19:16:50.167 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-02 19:16:50.167983 | orchestrator | 19:16:50.167 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:50.168034 | orchestrator | 19:16:50.167 STDOUT terraform:       + attachment = (known after apply)
2025-06-02 19:16:50.168071 | orchestrator | 19:16:50.168 STDOUT terraform:       + availability_zone = "nova"
2025-06-02 19:16:50.168121 | orchestrator | 19:16:50.168 STDOUT terraform:       + id = (known after apply)
2025-06-02 19:16:50.168174 | orchestrator | 19:16:50.168 STDOUT terraform:       + image_id = (known after apply)
2025-06-02 19:16:50.168224 | orchestrator | 19:16:50.168 STDOUT terraform:       + metadata = (known after apply)
2025-06-02 19:16:50.168287 | orchestrator | 19:16:50.168 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-06-02 19:16:50.168337 | orchestrator | 19:16:50.168 STDOUT terraform:       + region = (known after apply)
2025-06-02 19:16:50.168367 | orchestrator | 19:16:50.168 STDOUT terraform:       + size = 80
2025-06-02 19:16:50.168401 | orchestrator | 19:16:50.168 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-02 19:16:50.168437 | orchestrator | 19:16:50.168 STDOUT terraform:       + volume_type = "ssd"
2025-06-02 19:16:50.168446 | orchestrator | 19:16:50.168 STDOUT terraform:     }
2025-06-02 19:16:50.168514 | orchestrator | 19:16:50.168 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-02 19:16:50.168577 | orchestrator | 19:16:50.168 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 19:16:50.168628 | orchestrator | 19:16:50.168 STDOUT terraform:       + attachment = (known after apply)
2025-06-02 19:16:50.168663 | orchestrator | 19:16:50.168 STDOUT terraform:       +
availability_zone = "nova" 2025-06-02 19:16:50.168713 | orchestrator | 19:16:50.168 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.168779 | orchestrator | 19:16:50.168 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:50.168833 | orchestrator | 19:16:50.168 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-02 19:16:50.168883 | orchestrator | 19:16:50.168 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.168916 | orchestrator | 19:16:50.168 STDOUT terraform:  + size = 20 2025-06-02 19:16:50.168947 | orchestrator | 19:16:50.168 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:50.168983 | orchestrator | 19:16:50.168 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:50.168991 | orchestrator | 19:16:50.168 STDOUT terraform:  } 2025-06-02 19:16:50.169060 | orchestrator | 19:16:50.168 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-02 19:16:50.169126 | orchestrator | 19:16:50.169 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:50.169173 | orchestrator | 19:16:50.169 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:50.169208 | orchestrator | 19:16:50.169 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.169261 | orchestrator | 19:16:50.169 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.169311 | orchestrator | 19:16:50.169 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:50.169368 | orchestrator | 19:16:50.169 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-02 19:16:50.169419 | orchestrator | 19:16:50.169 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.169449 | orchestrator | 19:16:50.169 STDOUT terraform:  + size = 20 2025-06-02 19:16:50.169485 | orchestrator | 19:16:50.169 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:50.169526 | orchestrator | 
19:16:50.169 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:50.169533 | orchestrator | 19:16:50.169 STDOUT terraform:  } 2025-06-02 19:16:50.169607 | orchestrator | 19:16:50.169 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-02 19:16:50.169663 | orchestrator | 19:16:50.169 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:50.169714 | orchestrator | 19:16:50.169 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:50.169784 | orchestrator | 19:16:50.169 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.169837 | orchestrator | 19:16:50.169 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.169888 | orchestrator | 19:16:50.169 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:50.169943 | orchestrator | 19:16:50.169 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-02 19:16:50.169994 | orchestrator | 19:16:50.169 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.170042 | orchestrator | 19:16:50.169 STDOUT terraform:  + size = 20 2025-06-02 19:16:50.170078 | orchestrator | 19:16:50.170 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:50.170113 | orchestrator | 19:16:50.170 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:50.170134 | orchestrator | 19:16:50.170 STDOUT terraform:  } 2025-06-02 19:16:50.170195 | orchestrator | 19:16:50.170 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-02 19:16:50.170258 | orchestrator | 19:16:50.170 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:50.170309 | orchestrator | 19:16:50.170 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:50.170343 | orchestrator | 19:16:50.170 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.170396 | orchestrator | 19:16:50.170 STDOUT 
terraform:  + id = (known after apply) 2025-06-02 19:16:50.170469 | orchestrator | 19:16:50.170 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:50.170525 | orchestrator | 19:16:50.170 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-02 19:16:50.170578 | orchestrator | 19:16:50.170 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.170609 | orchestrator | 19:16:50.170 STDOUT terraform:  + size = 20 2025-06-02 19:16:50.170648 | orchestrator | 19:16:50.170 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:50.170678 | orchestrator | 19:16:50.170 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:50.170701 | orchestrator | 19:16:50.170 STDOUT terraform:  } 2025-06-02 19:16:50.170851 | orchestrator | 19:16:50.170 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-02 19:16:50.170923 | orchestrator | 19:16:50.170 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:50.170974 | orchestrator | 19:16:50.170 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:50.171012 | orchestrator | 19:16:50.170 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.171063 | orchestrator | 19:16:50.171 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.171114 | orchestrator | 19:16:50.171 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:50.171169 | orchestrator | 19:16:50.171 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-02 19:16:50.171221 | orchestrator | 19:16:50.171 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.171252 | orchestrator | 19:16:50.171 STDOUT terraform:  + size = 20 2025-06-02 19:16:50.171288 | orchestrator | 19:16:50.171 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:50.171319 | orchestrator | 19:16:50.171 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:50.171333 | 
orchestrator | 19:16:50.171 STDOUT terraform:  } 2025-06-02 19:16:50.171391 | orchestrator | 19:16:50.171 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-02 19:16:50.171446 | orchestrator | 19:16:50.171 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:50.171490 | orchestrator | 19:16:50.171 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:50.171520 | orchestrator | 19:16:50.171 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.171569 | orchestrator | 19:16:50.171 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.171615 | orchestrator | 19:16:50.171 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:50.171665 | orchestrator | 19:16:50.171 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-02 19:16:50.171711 | orchestrator | 19:16:50.171 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.171766 | orchestrator | 19:16:50.171 STDOUT terraform:  + size = 20 2025-06-02 19:16:50.171789 | orchestrator | 19:16:50.171 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:50.171822 | orchestrator | 19:16:50.171 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:50.171841 | orchestrator | 19:16:50.171 STDOUT terraform:  } 2025-06-02 19:16:50.171899 | orchestrator | 19:16:50.171 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-02 19:16:50.171954 | orchestrator | 19:16:50.171 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:50.171999 | orchestrator | 19:16:50.171 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:50.172034 | orchestrator | 19:16:50.171 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.172080 | orchestrator | 19:16:50.172 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.172124 | orchestrator | 
19:16:50.172 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:50.172173 | orchestrator | 19:16:50.172 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-02 19:16:50.172220 | orchestrator | 19:16:50.172 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.172247 | orchestrator | 19:16:50.172 STDOUT terraform:  + size = 20 2025-06-02 19:16:50.172285 | orchestrator | 19:16:50.172 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:50.172311 | orchestrator | 19:16:50.172 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:50.172318 | orchestrator | 19:16:50.172 STDOUT terraform:  } 2025-06-02 19:16:50.172379 | orchestrator | 19:16:50.172 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-02 19:16:50.172434 | orchestrator | 19:16:50.172 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:50.172479 | orchestrator | 19:16:50.172 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:50.172511 | orchestrator | 19:16:50.172 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.172559 | orchestrator | 19:16:50.172 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.172603 | orchestrator | 19:16:50.172 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:50.172652 | orchestrator | 19:16:50.172 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-02 19:16:50.172699 | orchestrator | 19:16:50.172 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.172729 | orchestrator | 19:16:50.172 STDOUT terraform:  + size = 20 2025-06-02 19:16:50.172786 | orchestrator | 19:16:50.172 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:50.172794 | orchestrator | 19:16:50.172 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:50.172819 | orchestrator | 19:16:50.172 STDOUT terraform:  } 2025-06-02 19:16:50.172877 | orchestrator | 
19:16:50.172 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-02 19:16:50.172931 | orchestrator | 19:16:50.172 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:50.172976 | orchestrator | 19:16:50.172 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:50.173008 | orchestrator | 19:16:50.172 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.173062 | orchestrator | 19:16:50.173 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.173108 | orchestrator | 19:16:50.173 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:50.173155 | orchestrator | 19:16:50.173 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-02 19:16:50.173204 | orchestrator | 19:16:50.173 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.173230 | orchestrator | 19:16:50.173 STDOUT terraform:  + size = 20 2025-06-02 19:16:50.173260 | orchestrator | 19:16:50.173 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:50.173292 | orchestrator | 19:16:50.173 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:50.173299 | orchestrator | 19:16:50.173 STDOUT terraform:  } 2025-06-02 19:16:50.173361 | orchestrator | 19:16:50.173 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-02 19:16:50.173415 | orchestrator | 19:16:50.173 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-02 19:16:50.173458 | orchestrator | 19:16:50.173 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 19:16:50.173504 | orchestrator | 19:16:50.173 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 19:16:50.173546 | orchestrator | 19:16:50.173 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 19:16:50.173591 | orchestrator | 19:16:50.173 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 
19:16:50.173623 | orchestrator | 19:16:50.173 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.173650 | orchestrator | 19:16:50.173 STDOUT terraform:  + config_drive = true 2025-06-02 19:16:50.173695 | orchestrator | 19:16:50.173 STDOUT terraform:  + created = (known after apply) 2025-06-02 19:16:50.173752 | orchestrator | 19:16:50.173 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 19:16:50.173795 | orchestrator | 19:16:50.173 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-02 19:16:50.173825 | orchestrator | 19:16:50.173 STDOUT terraform:  + force_delete = false 2025-06-02 19:16:50.173870 | orchestrator | 19:16:50.173 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 19:16:50.173916 | orchestrator | 19:16:50.173 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.173969 | orchestrator | 19:16:50.173 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 19:16:50.174040 | orchestrator | 19:16:50.173 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 19:16:50.174051 | orchestrator | 19:16:50.174 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 19:16:50.174096 | orchestrator | 19:16:50.174 STDOUT terraform:  + name = "testbed-manager" 2025-06-02 19:16:50.174129 | orchestrator | 19:16:50.174 STDOUT terraform:  + power_state = "active" 2025-06-02 19:16:50.174174 | orchestrator | 19:16:50.174 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.174218 | orchestrator | 19:16:50.174 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 19:16:50.174252 | orchestrator | 19:16:50.174 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 19:16:50.174294 | orchestrator | 19:16:50.174 STDOUT terraform:  + updated = (known after apply) 2025-06-02 19:16:50.174340 | orchestrator | 19:16:50.174 STDOUT terraform:  + user_data = (known after apply) 2025-06-02 19:16:50.174363 | orchestrator | 19:16:50.174 STDOUT terraform:  + block_device 
{ 2025-06-02 19:16:50.174394 | orchestrator | 19:16:50.174 STDOUT terraform:  + boot_index = 0 2025-06-02 19:16:50.174431 | orchestrator | 19:16:50.174 STDOUT terraform:  + delete_on_termination = false 2025-06-02 19:16:50.174467 | orchestrator | 19:16:50.174 STDOUT terraform:  + destination_type = "volume" 2025-06-02 19:16:50.174504 | orchestrator | 19:16:50.174 STDOUT terraform:  + multiattach = false 2025-06-02 19:16:50.174543 | orchestrator | 19:16:50.174 STDOUT terraform:  + source_type = "volume" 2025-06-02 19:16:50.174592 | orchestrator | 19:16:50.174 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:50.174600 | orchestrator | 19:16:50.174 STDOUT terraform:  } 2025-06-02 19:16:50.174625 | orchestrator | 19:16:50.174 STDOUT terraform:  + network { 2025-06-02 19:16:50.174654 | orchestrator | 19:16:50.174 STDOUT terraform:  + access_network = false 2025-06-02 19:16:50.174694 | orchestrator | 19:16:50.174 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 19:16:50.174733 | orchestrator | 19:16:50.174 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 19:16:50.174804 | orchestrator | 19:16:50.174 STDOUT terraform:  + mac = (known after apply) 2025-06-02 19:16:50.174847 | orchestrator | 19:16:50.174 STDOUT terraform:  + name = (known after apply) 2025-06-02 19:16:50.174890 | orchestrator | 19:16:50.174 STDOUT terraform:  + port = (known after apply) 2025-06-02 19:16:50.174927 | orchestrator | 19:16:50.174 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:50.174940 | orchestrator | 19:16:50.174 STDOUT terraform:  } 2025-06-02 19:16:50.174947 | orchestrator | 19:16:50.174 STDOUT terraform:  } 2025-06-02 19:16:50.175008 | orchestrator | 19:16:50.174 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-02 19:16:50.175059 | orchestrator | 19:16:50.175 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 19:16:50.175104 | orchestrator | 
19:16:50.175 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 19:16:50.175149 | orchestrator | 19:16:50.175 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 19:16:50.175195 | orchestrator | 19:16:50.175 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 19:16:50.175240 | orchestrator | 19:16:50.175 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.175271 | orchestrator | 19:16:50.175 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.175300 | orchestrator | 19:16:50.175 STDOUT terraform:  + config_drive = true 2025-06-02 19:16:50.175345 | orchestrator | 19:16:50.175 STDOUT terraform:  + created = (known after apply) 2025-06-02 19:16:50.175392 | orchestrator | 19:16:50.175 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 19:16:50.175441 | orchestrator | 19:16:50.175 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 19:16:50.175474 | orchestrator | 19:16:50.175 STDOUT terraform:  + force_delete = false 2025-06-02 19:16:50.175517 | orchestrator | 19:16:50.175 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 19:16:50.175562 | orchestrator | 19:16:50.175 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.175607 | orchestrator | 19:16:50.175 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 19:16:50.175652 | orchestrator | 19:16:50.175 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 19:16:50.175683 | orchestrator | 19:16:50.175 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 19:16:50.175724 | orchestrator | 19:16:50.175 STDOUT terraform:  + name = "testbed-node-0" 2025-06-02 19:16:50.175776 | orchestrator | 19:16:50.175 STDOUT terraform:  + power_state = "active" 2025-06-02 19:16:50.175832 | orchestrator | 19:16:50.175 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.175869 | orchestrator | 19:16:50.175 STDOUT terraform:  + security_groups = (known after apply) 
2025-06-02 19:16:50.175897 | orchestrator | 19:16:50.175 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 19:16:50.175945 | orchestrator | 19:16:50.175 STDOUT terraform:  + updated = (known after apply) 2025-06-02 19:16:50.176019 | orchestrator | 19:16:50.175 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 19:16:50.176043 | orchestrator | 19:16:50.176 STDOUT terraform:  + block_device { 2025-06-02 19:16:50.176074 | orchestrator | 19:16:50.176 STDOUT terraform:  + boot_index = 0 2025-06-02 19:16:50.176113 | orchestrator | 19:16:50.176 STDOUT terraform:  + delete_on_termination = false 2025-06-02 19:16:50.176156 | orchestrator | 19:16:50.176 STDOUT terraform:  + destination_type = "volume" 2025-06-02 19:16:50.176195 | orchestrator | 19:16:50.176 STDOUT terraform:  + multiattach = false 2025-06-02 19:16:50.176239 | orchestrator | 19:16:50.176 STDOUT terraform:  + source_type = "volume" 2025-06-02 19:16:50.176301 | orchestrator | 19:16:50.176 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:50.176308 | orchestrator | 19:16:50.176 STDOUT terraform:  } 2025-06-02 19:16:50.176338 | orchestrator | 19:16:50.176 STDOUT terraform:  + network { 2025-06-02 19:16:50.176368 | orchestrator | 19:16:50.176 STDOUT terraform:  + access_network = false 2025-06-02 19:16:50.176414 | orchestrator | 19:16:50.176 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 19:16:50.176456 | orchestrator | 19:16:50.176 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 19:16:50.176500 | orchestrator | 19:16:50.176 STDOUT terraform:  + mac = (known after apply) 2025-06-02 19:16:50.176546 | orchestrator | 19:16:50.176 STDOUT terraform:  + name = (known after apply) 2025-06-02 19:16:50.176589 | orchestrator | 19:16:50.176 STDOUT terraform:  + port = (known after apply) 2025-06-02 19:16:50.176634 | orchestrator | 19:16:50.176 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:50.176656 | 
orchestrator | 19:16:50.176 STDOUT terraform:  } 2025-06-02 19:16:50.176677 | orchestrator | 19:16:50.176 STDOUT terraform:  } 2025-06-02 19:16:50.176776 | orchestrator | 19:16:50.176 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-02 19:16:50.176828 | orchestrator | 19:16:50.176 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 19:16:50.176880 | orchestrator | 19:16:50.176 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 19:16:50.176929 | orchestrator | 19:16:50.176 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 19:16:50.176980 | orchestrator | 19:16:50.176 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 19:16:50.177029 | orchestrator | 19:16:50.176 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.177065 | orchestrator | 19:16:50.177 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.177095 | orchestrator | 19:16:50.177 STDOUT terraform:  + config_drive = true 2025-06-02 19:16:50.177144 | orchestrator | 19:16:50.177 STDOUT terraform:  + created = (known after apply) 2025-06-02 19:16:50.177195 | orchestrator | 19:16:50.177 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 19:16:50.177238 | orchestrator | 19:16:50.177 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 19:16:50.177272 | orchestrator | 19:16:50.177 STDOUT terraform:  + force_delete = false 2025-06-02 19:16:50.177324 | orchestrator | 19:16:50.177 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 19:16:50.177375 | orchestrator | 19:16:50.177 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.177426 | orchestrator | 19:16:50.177 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 19:16:50.177475 | orchestrator | 19:16:50.177 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 19:16:50.177510 | orchestrator | 19:16:50.177 STDOUT terraform:  + 
key_pair = "testbed" 2025-06-02 19:16:50.177555 | orchestrator | 19:16:50.177 STDOUT terraform:  + name = "testbed-node-1" 2025-06-02 19:16:50.177591 | orchestrator | 19:16:50.177 STDOUT terraform:  + power_state = "active" 2025-06-02 19:16:50.177643 | orchestrator | 19:16:50.177 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.177693 | orchestrator | 19:16:50.177 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 19:16:50.177726 | orchestrator | 19:16:50.177 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 19:16:50.177800 | orchestrator | 19:16:50.177 STDOUT terraform:  + updated = (known after apply) 2025-06-02 19:16:50.177880 | orchestrator | 19:16:50.177 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 19:16:50.177889 | orchestrator | 19:16:50.177 STDOUT terraform:  + block_device { 2025-06-02 19:16:50.177926 | orchestrator | 19:16:50.177 STDOUT terraform:  + boot_index = 0 2025-06-02 19:16:50.177966 | orchestrator | 19:16:50.177 STDOUT terraform:  + delete_on_termination = false 2025-06-02 19:16:50.178008 | orchestrator | 19:16:50.177 STDOUT terraform:  + destination_type = "volume" 2025-06-02 19:16:50.178073 | orchestrator | 19:16:50.178 STDOUT terraform:  + multiattach = false 2025-06-02 19:16:50.178123 | orchestrator | 19:16:50.178 STDOUT terraform:  + source_type = "volume" 2025-06-02 19:16:50.178174 | orchestrator | 19:16:50.178 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:50.178197 | orchestrator | 19:16:50.178 STDOUT terraform:  } 2025-06-02 19:16:50.178206 | orchestrator | 19:16:50.178 STDOUT terraform:  + network { 2025-06-02 19:16:50.178241 | orchestrator | 19:16:50.178 STDOUT terraform:  + access_network = false 2025-06-02 19:16:50.178285 | orchestrator | 19:16:50.178 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 19:16:50.178331 | orchestrator | 19:16:50.178 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 
19:16:50.178372 | orchestrator | 19:16:50.178 STDOUT terraform:  + mac = (known after apply) 2025-06-02 19:16:50.178419 | orchestrator | 19:16:50.178 STDOUT terraform:  + name = (known after apply) 2025-06-02 19:16:50.178462 | orchestrator | 19:16:50.178 STDOUT terraform:  + port = (known after apply) 2025-06-02 19:16:50.178507 | orchestrator | 19:16:50.178 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:50.178528 | orchestrator | 19:16:50.178 STDOUT terraform:  } 2025-06-02 19:16:50.178551 | orchestrator | 19:16:50.178 STDOUT terraform:  } 2025-06-02 19:16:50.178611 | orchestrator | 19:16:50.178 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-02 19:16:50.178671 | orchestrator | 19:16:50.178 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 19:16:50.178720 | orchestrator | 19:16:50.178 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 19:16:50.178786 | orchestrator | 19:16:50.178 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 19:16:50.178840 | orchestrator | 19:16:50.178 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 19:16:50.178887 | orchestrator | 19:16:50.178 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.178923 | orchestrator | 19:16:50.178 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:50.178951 | orchestrator | 19:16:50.178 STDOUT terraform:  + config_drive = true 2025-06-02 19:16:50.179003 | orchestrator | 19:16:50.178 STDOUT terraform:  + created = (known after apply) 2025-06-02 19:16:50.179055 | orchestrator | 19:16:50.178 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 19:16:50.179098 | orchestrator | 19:16:50.179 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 19:16:50.179130 | orchestrator | 19:16:50.179 STDOUT terraform:  + force_delete = false 2025-06-02 19:16:50.179179 | orchestrator | 19:16:50.179 STDOUT terraform:  + 
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-06-02 19:16:50.199146 | orchestrator | 19:16:50.199 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:50.199153 | orchestrator | 19:16:50.199 STDOUT terraform:  } 2025-06-02 19:16:50.199173 | orchestrator | 19:16:50.199 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.199205 | orchestrator | 19:16:50.199 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 19:16:50.199213 | orchestrator | 19:16:50.199 STDOUT terraform:  } 2025-06-02 19:16:50.199242 | orchestrator | 19:16:50.199 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.199280 | orchestrator | 19:16:50.199 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:50.199286 | orchestrator | 19:16:50.199 STDOUT terraform:  } 2025-06-02 19:16:50.199318 | orchestrator | 19:16:50.199 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:50.199325 | orchestrator | 19:16:50.199 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:50.199357 | orchestrator | 19:16:50.199 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-02 19:16:50.199391 | orchestrator | 19:16:50.199 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:50.199397 | orchestrator | 19:16:50.199 STDOUT terraform:  } 2025-06-02 19:16:50.199407 | orchestrator | 19:16:50.199 STDOUT terraform:  } 2025-06-02 19:16:50.199463 | orchestrator | 19:16:50.199 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-02 19:16:50.199515 | orchestrator | 19:16:50.199 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 19:16:50.199543 | orchestrator | 19:16:50.199 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:50.199586 | orchestrator | 19:16:50.199 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 19:16:50.199621 | orchestrator | 19:16:50.199 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-02 19:16:50.199662 | orchestrator | 19:16:50.199 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.199699 | orchestrator | 19:16:50.199 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 19:16:50.199747 | orchestrator | 19:16:50.199 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 19:16:50.199801 | orchestrator | 19:16:50.199 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 19:16:50.199844 | orchestrator | 19:16:50.199 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 19:16:50.199884 | orchestrator | 19:16:50.199 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.199919 | orchestrator | 19:16:50.199 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 19:16:50.199958 | orchestrator | 19:16:50.199 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 19:16:50.199997 | orchestrator | 19:16:50.199 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 19:16:50.200036 | orchestrator | 19:16:50.199 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 19:16:50.200077 | orchestrator | 19:16:50.200 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.200114 | orchestrator | 19:16:50.200 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 19:16:50.200154 | orchestrator | 19:16:50.200 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.200163 | orchestrator | 19:16:50.200 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.200208 | orchestrator | 19:16:50.200 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 19:16:50.200214 | orchestrator | 19:16:50.200 STDOUT terraform:  } 2025-06-02 19:16:50.200235 | orchestrator | 19:16:50.200 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.200269 | orchestrator | 19:16:50.200 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:50.200275 | 
orchestrator | 19:16:50.200 STDOUT terraform:  } 2025-06-02 19:16:50.200301 | orchestrator | 19:16:50.200 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.200332 | orchestrator | 19:16:50.200 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 19:16:50.200339 | orchestrator | 19:16:50.200 STDOUT terraform:  } 2025-06-02 19:16:50.200366 | orchestrator | 19:16:50.200 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.200399 | orchestrator | 19:16:50.200 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:50.200410 | orchestrator | 19:16:50.200 STDOUT terraform:  } 2025-06-02 19:16:50.200439 | orchestrator | 19:16:50.200 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:50.200446 | orchestrator | 19:16:50.200 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:50.200479 | orchestrator | 19:16:50.200 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-02 19:16:50.200513 | orchestrator | 19:16:50.200 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:50.200518 | orchestrator | 19:16:50.200 STDOUT terraform:  } 2025-06-02 19:16:50.200524 | orchestrator | 19:16:50.200 STDOUT terraform:  } 2025-06-02 19:16:50.200581 | orchestrator | 19:16:50.200 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-02 19:16:50.200632 | orchestrator | 19:16:50.200 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 19:16:50.200671 | orchestrator | 19:16:50.200 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:50.200708 | orchestrator | 19:16:50.200 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 19:16:50.200768 | orchestrator | 19:16:50.200 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 19:16:50.200808 | orchestrator | 19:16:50.200 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.200848 | orchestrator | 
19:16:50.200 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 19:16:50.200887 | orchestrator | 19:16:50.200 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 19:16:50.200926 | orchestrator | 19:16:50.200 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 19:16:50.200965 | orchestrator | 19:16:50.200 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 19:16:50.201006 | orchestrator | 19:16:50.200 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.201048 | orchestrator | 19:16:50.200 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 19:16:50.201086 | orchestrator | 19:16:50.201 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 19:16:50.201122 | orchestrator | 19:16:50.201 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 19:16:50.201161 | orchestrator | 19:16:50.201 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 19:16:50.201200 | orchestrator | 19:16:50.201 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.201238 | orchestrator | 19:16:50.201 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 19:16:50.201280 | orchestrator | 19:16:50.201 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.201287 | orchestrator | 19:16:50.201 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.201327 | orchestrator | 19:16:50.201 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 19:16:50.201388 | orchestrator | 19:16:50.201 STDOUT terraform:  } 2025-06-02 19:16:50.201414 | orchestrator | 19:16:50.201 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.201448 | orchestrator | 19:16:50.201 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:50.201455 | orchestrator | 19:16:50.201 STDOUT terraform:  } 2025-06-02 19:16:50.201476 | orchestrator
| 19:16:50.201 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.201510 | orchestrator | 19:16:50.201 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 19:16:50.201515 | orchestrator | 19:16:50.201 STDOUT terraform:  } 2025-06-02 19:16:50.201537 | orchestrator | 19:16:50.201 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.201567 | orchestrator | 19:16:50.201 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:50.201575 | orchestrator | 19:16:50.201 STDOUT terraform:  } 2025-06-02 19:16:50.201604 | orchestrator | 19:16:50.201 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:50.201611 | orchestrator | 19:16:50.201 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:50.201643 | orchestrator | 19:16:50.201 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-02 19:16:50.201674 | orchestrator | 19:16:50.201 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:50.201681 | orchestrator | 19:16:50.201 STDOUT terraform:  } 2025-06-02 19:16:50.201686 | orchestrator | 19:16:50.201 STDOUT terraform:  } 2025-06-02 19:16:50.201752 | orchestrator | 19:16:50.201 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-02 19:16:50.201821 | orchestrator | 19:16:50.201 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 19:16:50.201862 | orchestrator | 19:16:50.201 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:50.201900 | orchestrator | 19:16:50.201 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 19:16:50.201938 | orchestrator | 19:16:50.201 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 19:16:50.201997 | orchestrator | 19:16:50.201 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.202004 | orchestrator | 19:16:50.201 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 19:16:50.202065 | 
orchestrator | 19:16:50.201 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 19:16:50.202101 | orchestrator | 19:16:50.202 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 19:16:50.202144 | orchestrator | 19:16:50.202 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 19:16:50.202181 | orchestrator | 19:16:50.202 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.202219 | orchestrator | 19:16:50.202 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 19:16:50.202256 | orchestrator | 19:16:50.202 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 19:16:50.202295 | orchestrator | 19:16:50.202 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 19:16:50.202332 | orchestrator | 19:16:50.202 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 19:16:50.202375 | orchestrator | 19:16:50.202 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.202409 | orchestrator | 19:16:50.202 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 19:16:50.202447 | orchestrator | 19:16:50.202 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.202469 | orchestrator | 19:16:50.202 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.202500 | orchestrator | 19:16:50.202 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 19:16:50.202507 | orchestrator | 19:16:50.202 STDOUT terraform:  } 2025-06-02 19:16:50.202535 | orchestrator | 19:16:50.202 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.202565 | orchestrator | 19:16:50.202 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:50.202572 | orchestrator | 19:16:50.202 STDOUT terraform:  } 2025-06-02 19:16:50.202596 | orchestrator | 19:16:50.202 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.202626 | orchestrator | 19:16:50.202 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-06-02 19:16:50.202633 | orchestrator | 19:16:50.202 STDOUT terraform:  } 2025-06-02 19:16:50.202663 | orchestrator | 19:16:50.202 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.202696 | orchestrator | 19:16:50.202 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:50.202704 | orchestrator | 19:16:50.202 STDOUT terraform:  } 2025-06-02 19:16:50.202734 | orchestrator | 19:16:50.202 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:50.202753 | orchestrator | 19:16:50.202 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:50.202787 | orchestrator | 19:16:50.202 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-02 19:16:50.202842 | orchestrator | 19:16:50.202 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:50.202848 | orchestrator | 19:16:50.202 STDOUT terraform:  } 2025-06-02 19:16:50.202852 | orchestrator | 19:16:50.202 STDOUT terraform:  } 2025-06-02 19:16:50.202888 | orchestrator | 19:16:50.202 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-02 19:16:50.202936 | orchestrator | 19:16:50.202 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 19:16:50.202976 | orchestrator | 19:16:50.202 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:50.203014 | orchestrator | 19:16:50.202 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 19:16:50.203050 | orchestrator | 19:16:50.203 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 19:16:50.203087 | orchestrator | 19:16:50.203 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.203126 | orchestrator | 19:16:50.203 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 19:16:50.203163 | orchestrator | 19:16:50.203 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 19:16:50.203200 | orchestrator | 19:16:50.203 STDOUT terraform:  + 
dns_assignment = (known after apply) 2025-06-02 19:16:50.203240 | orchestrator | 19:16:50.203 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 19:16:50.203279 | orchestrator | 19:16:50.203 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.203318 | orchestrator | 19:16:50.203 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 19:16:50.203355 | orchestrator | 19:16:50.203 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 19:16:50.203391 | orchestrator | 19:16:50.203 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 19:16:50.203428 | orchestrator | 19:16:50.203 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 19:16:50.203468 | orchestrator | 19:16:50.203 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.203506 | orchestrator | 19:16:50.203 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 19:16:50.203545 | orchestrator | 19:16:50.203 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.203556 | orchestrator | 19:16:50.203 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.203600 | orchestrator | 19:16:50.203 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 19:16:50.203608 | orchestrator | 19:16:50.203 STDOUT terraform:  } 2025-06-02 19:16:50.203640 | orchestrator | 19:16:50.203 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.203678 | orchestrator | 19:16:50.203 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:50.203684 | orchestrator | 19:16:50.203 STDOUT terraform:  } 2025-06-02 19:16:50.203711 | orchestrator | 19:16:50.203 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:50.203762 | orchestrator | 19:16:50.203 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 19:16:50.203769 | orchestrator | 19:16:50.203 STDOUT terraform:  } 2025-06-02 19:16:50.203775 | orchestrator | 19:16:50.203 STDOUT terraform:  + 
allowed_address_pairs { 2025-06-02 19:16:50.203810 | orchestrator | 19:16:50.203 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:50.203818 | orchestrator | 19:16:50.203 STDOUT terraform:  } 2025-06-02 19:16:50.203849 | orchestrator | 19:16:50.203 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:50.203857 | orchestrator | 19:16:50.203 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:50.203888 | orchestrator | 19:16:50.203 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-02 19:16:50.203920 | orchestrator | 19:16:50.203 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:50.203926 | orchestrator | 19:16:50.203 STDOUT terraform:  } 2025-06-02 19:16:50.203932 | orchestrator | 19:16:50.203 STDOUT terraform:  } 2025-06-02 19:16:50.203991 | orchestrator | 19:16:50.203 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-02 19:16:50.204044 | orchestrator | 19:16:50.203 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-02 19:16:50.204052 | orchestrator | 19:16:50.204 STDOUT terraform:  + force_destroy = false 2025-06-02 19:16:50.204091 | orchestrator | 19:16:50.204 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.204123 | orchestrator | 19:16:50.204 STDOUT terraform:  + port_id = (known after apply) 2025-06-02 19:16:50.204157 | orchestrator | 19:16:50.204 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.204180 | orchestrator | 19:16:50.204 STDOUT terraform:  + router_id = (known after apply) 2025-06-02 19:16:50.204215 | orchestrator | 19:16:50.204 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:50.204221 | orchestrator | 19:16:50.204 STDOUT terraform:  } 2025-06-02 19:16:50.204265 | orchestrator | 19:16:50.204 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-02 19:16:50.204303 | orchestrator | 19:16:50.204 STDOUT 
terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-02 19:16:50.204341 | orchestrator | 19:16:50.204 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:50.204378 | orchestrator | 19:16:50.204 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.204409 | orchestrator | 19:16:50.204 STDOUT terraform:  + availability_zone_hints = [ 2025-06-02 19:16:50.204415 | orchestrator | 19:16:50.204 STDOUT terraform:  + "nova", 2025-06-02 19:16:50.204424 | orchestrator | 19:16:50.204 STDOUT terraform:  ] 2025-06-02 19:16:50.204470 | orchestrator | 19:16:50.204 STDOUT terraform:  + distributed = (known after apply) 2025-06-02 19:16:50.204510 | orchestrator | 19:16:50.204 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-02 19:16:50.204560 | orchestrator | 19:16:50.204 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-02 19:16:50.204600 | orchestrator | 19:16:50.204 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.204631 | orchestrator | 19:16:50.204 STDOUT terraform:  + name = "testbed" 2025-06-02 19:16:50.204672 | orchestrator | 19:16:50.204 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.204709 | orchestrator | 19:16:50.204 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.204749 | orchestrator | 19:16:50.204 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-02 19:16:50.204784 | orchestrator | 19:16:50.204 STDOUT terraform:  } 2025-06-02 19:16:50.204851 | orchestrator | 19:16:50.204 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-02 19:16:50.204913 | orchestrator | 19:16:50.204 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-02 19:16:50.204933 | orchestrator | 19:16:50.204 STDOUT terraform:  + description = "ssh" 2025-06-02 19:16:50.204954 | orchestrator 
| 19:16:50.204 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:50.204976 | orchestrator | 19:16:50.204 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:50.205009 | orchestrator | 19:16:50.204 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.205018 | orchestrator | 19:16:50.204 STDOUT terraform:  + port_range_max = 22 2025-06-02 19:16:50.205050 | orchestrator | 19:16:50.205 STDOUT terraform:  + port_range_min = 22 2025-06-02 19:16:50.205070 | orchestrator | 19:16:50.205 STDOUT terraform:  + protocol = "tcp" 2025-06-02 19:16:50.205104 | orchestrator | 19:16:50.205 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.205136 | orchestrator | 19:16:50.205 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:50.205149 | orchestrator | 19:16:50.205 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:50.205191 | orchestrator | 19:16:50.205 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:50.205227 | orchestrator | 19:16:50.205 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.205233 | orchestrator | 19:16:50.205 STDOUT terraform:  } 2025-06-02 19:16:50.205293 | orchestrator | 19:16:50.205 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-02 19:16:50.205350 | orchestrator | 19:16:50.205 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-02 19:16:50.205373 | orchestrator | 19:16:50.205 STDOUT terraform:  + description = "wireguard" 2025-06-02 19:16:50.205393 | orchestrator | 19:16:50.205 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:50.205413 | orchestrator | 19:16:50.205 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:50.205447 | orchestrator | 19:16:50.205 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.205454 | orchestrator | 19:16:50.205 STDOUT terraform:  + 
port_range_max = 51820 2025-06-02 19:16:50.205486 | orchestrator | 19:16:50.205 STDOUT terraform:  + port_range_min = 51820 2025-06-02 19:16:50.205494 | orchestrator | 19:16:50.205 STDOUT terraform:  + protocol = "udp" 2025-06-02 19:16:50.205536 | orchestrator | 19:16:50.205 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.205569 | orchestrator | 19:16:50.205 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:50.205589 | orchestrator | 19:16:50.205 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:50.205622 | orchestrator | 19:16:50.205 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:50.205657 | orchestrator | 19:16:50.205 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.205665 | orchestrator | 19:16:50.205 STDOUT terraform:  } 2025-06-02 19:16:50.205720 | orchestrator | 19:16:50.205 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-02 19:16:50.205785 | orchestrator | 19:16:50.205 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-02 19:16:50.205806 | orchestrator | 19:16:50.205 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:50.205814 | orchestrator | 19:16:50.205 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:50.205860 | orchestrator | 19:16:50.205 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.205868 | orchestrator | 19:16:50.205 STDOUT terraform:  + protocol = "tcp" 2025-06-02 19:16:50.205906 | orchestrator | 19:16:50.205 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.205939 | orchestrator | 19:16:50.205 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:50.205969 | orchestrator | 19:16:50.205 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 19:16:50.206001 | orchestrator | 19:16:50.205 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-06-02 19:16:50.206052 | orchestrator | 19:16:50.205 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.206067 | orchestrator | 19:16:50.206 STDOUT terraform:  } 2025-06-02 19:16:50.206119 | orchestrator | 19:16:50.206 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-02 19:16:50.206173 | orchestrator | 19:16:50.206 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-02 19:16:50.206196 | orchestrator | 19:16:50.206 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:50.206217 | orchestrator | 19:16:50.206 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:50.206253 | orchestrator | 19:16:50.206 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.206261 | orchestrator | 19:16:50.206 STDOUT terraform:  + protocol = "udp" 2025-06-02 19:16:50.206303 | orchestrator | 19:16:50.206 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.206334 | orchestrator | 19:16:50.206 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:50.206366 | orchestrator | 19:16:50.206 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 19:16:50.206398 | orchestrator | 19:16:50.206 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:50.206431 | orchestrator | 19:16:50.206 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.206439 | orchestrator | 19:16:50.206 STDOUT terraform:  } 2025-06-02 19:16:50.206498 | orchestrator | 19:16:50.206 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-02 19:16:50.206552 | orchestrator | 19:16:50.206 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-02 19:16:50.206575 | orchestrator | 19:16:50.206 STDOUT terraform:  + 
direction = "ingress" 2025-06-02 19:16:50.206582 | orchestrator | 19:16:50.206 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:50.206625 | orchestrator | 19:16:50.206 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.206633 | orchestrator | 19:16:50.206 STDOUT terraform:  + protocol = "icmp" 2025-06-02 19:16:50.206677 | orchestrator | 19:16:50.206 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.206707 | orchestrator | 19:16:50.206 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:50.206729 | orchestrator | 19:16:50.206 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:50.206769 | orchestrator | 19:16:50.206 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:50.206803 | orchestrator | 19:16:50.206 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.206808 | orchestrator | 19:16:50.206 STDOUT terraform:  } 2025-06-02 19:16:50.206865 | orchestrator | 19:16:50.206 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-02 19:16:50.206915 | orchestrator | 19:16:50.206 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-02 19:16:50.206937 | orchestrator | 19:16:50.206 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:50.206946 | orchestrator | 19:16:50.206 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:50.206989 | orchestrator | 19:16:50.206 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.207000 | orchestrator | 19:16:50.206 STDOUT terraform:  + protocol = "tcp" 2025-06-02 19:16:50.207039 | orchestrator | 19:16:50.206 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.207072 | orchestrator | 19:16:50.207 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:50.207099 | orchestrator | 19:16:50.207 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 
2025-06-02 19:16:50.207145 | orchestrator | 19:16:50.207 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:50.207187 | orchestrator | 19:16:50.207 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.207195 | orchestrator | 19:16:50.207 STDOUT terraform:  } 2025-06-02 19:16:50.207255 | orchestrator | 19:16:50.207 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-02 19:16:50.207309 | orchestrator | 19:16:50.207 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-02 19:16:50.207350 | orchestrator | 19:16:50.207 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:50.207357 | orchestrator | 19:16:50.207 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:50.207396 | orchestrator | 19:16:50.207 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.207417 | orchestrator | 19:16:50.207 STDOUT terraform:  + protocol = "udp" 2025-06-02 19:16:50.207463 | orchestrator | 19:16:50.207 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.207506 | orchestrator | 19:16:50.207 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:50.207535 | orchestrator | 19:16:50.207 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:50.207570 | orchestrator | 19:16:50.207 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:50.207605 | orchestrator | 19:16:50.207 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.207611 | orchestrator | 19:16:50.207 STDOUT terraform:  } 2025-06-02 19:16:50.207670 | orchestrator | 19:16:50.207 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-02 19:16:50.207721 | orchestrator | 19:16:50.207 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-02 19:16:50.207773 | 
orchestrator | 19:16:50.207 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:50.207779 | orchestrator | 19:16:50.207 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:50.207813 | orchestrator | 19:16:50.207 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.207820 | orchestrator | 19:16:50.207 STDOUT terraform:  + protocol = "icmp" 2025-06-02 19:16:50.207861 | orchestrator | 19:16:50.207 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.207895 | orchestrator | 19:16:50.207 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:50.207924 | orchestrator | 19:16:50.207 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:50.207951 | orchestrator | 19:16:50.207 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:50.207985 | orchestrator | 19:16:50.207 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.207991 | orchestrator | 19:16:50.207 STDOUT terraform:  } 2025-06-02 19:16:50.208049 | orchestrator | 19:16:50.207 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-02 19:16:50.208102 | orchestrator | 19:16:50.208 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-02 19:16:50.208111 | orchestrator | 19:16:50.208 STDOUT terraform:  + description = "vrrp" 2025-06-02 19:16:50.208143 | orchestrator | 19:16:50.208 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:50.208164 | orchestrator | 19:16:50.208 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:50.208201 | orchestrator | 19:16:50.208 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.208212 | orchestrator | 19:16:50.208 STDOUT terraform:  + protocol = "112" 2025-06-02 19:16:50.208250 | orchestrator | 19:16:50.208 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.208281 | orchestrator | 19:16:50.208 STDOUT terraform:  + 
remote_group_id = (known after apply) 2025-06-02 19:16:50.208313 | orchestrator | 19:16:50.208 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:50.208344 | orchestrator | 19:16:50.208 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:50.208377 | orchestrator | 19:16:50.208 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.208384 | orchestrator | 19:16:50.208 STDOUT terraform:  } 2025-06-02 19:16:50.208440 | orchestrator | 19:16:50.208 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-06-02 19:16:50.208490 | orchestrator | 19:16:50.208 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-02 19:16:50.208516 | orchestrator | 19:16:50.208 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.208551 | orchestrator | 19:16:50.208 STDOUT terraform:  + description = "management security group" 2025-06-02 19:16:50.208582 | orchestrator | 19:16:50.208 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.208614 | orchestrator | 19:16:50.208 STDOUT terraform:  + name = "testbed-management" 2025-06-02 19:16:50.208646 | orchestrator | 19:16:50.208 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.208677 | orchestrator | 19:16:50.208 STDOUT terraform:  + stateful = (known after apply) 2025-06-02 19:16:50.208706 | orchestrator | 19:16:50.208 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.208712 | orchestrator | 19:16:50.208 STDOUT terraform:  } 2025-06-02 19:16:50.208793 | orchestrator | 19:16:50.208 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-02 19:16:50.208841 | orchestrator | 19:16:50.208 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-02 19:16:50.208869 | orchestrator | 19:16:50.208 STDOUT terraform:  + all_tags = (known after 
apply) 2025-06-02 19:16:50.208903 | orchestrator | 19:16:50.208 STDOUT terraform:  + description = "node security group" 2025-06-02 19:16:50.208929 | orchestrator | 19:16:50.208 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.208949 | orchestrator | 19:16:50.208 STDOUT terraform:  + name = "testbed-node" 2025-06-02 19:16:50.208984 | orchestrator | 19:16:50.208 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.209007 | orchestrator | 19:16:50.208 STDOUT terraform:  + stateful = (known after apply) 2025-06-02 19:16:50.209038 | orchestrator | 19:16:50.208 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.209046 | orchestrator | 19:16:50.209 STDOUT terraform:  } 2025-06-02 19:16:50.209098 | orchestrator | 19:16:50.209 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-02 19:16:50.209143 | orchestrator | 19:16:50.209 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-02 19:16:50.209176 | orchestrator | 19:16:50.209 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:50.209211 | orchestrator | 19:16:50.209 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-02 19:16:50.209220 | orchestrator | 19:16:50.209 STDOUT terraform:  + dns_nameservers = [ 2025-06-02 19:16:50.209248 | orchestrator | 19:16:50.209 STDOUT terraform:  + "8.8.8.8", 2025-06-02 19:16:50.209256 | orchestrator | 19:16:50.209 STDOUT terraform:  + "9.9.9.9", 2025-06-02 19:16:50.209262 | orchestrator | 19:16:50.209 STDOUT terraform:  ] 2025-06-02 19:16:50.209292 | orchestrator | 19:16:50.209 STDOUT terraform:  + enable_dhcp = true 2025-06-02 19:16:50.209325 | orchestrator | 19:16:50.209 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-02 19:16:50.209362 | orchestrator | 19:16:50.209 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.209370 | orchestrator | 19:16:50.209 STDOUT terraform:  + ip_version = 4 2025-06-02 
19:16:50.209407 | orchestrator | 19:16:50.209 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-06-02 19:16:50.209440 | orchestrator | 19:16:50.209 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-02 19:16:50.209480 | orchestrator | 19:16:50.209 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-02 19:16:50.209511 | orchestrator | 19:16:50.209 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 19:16:50.209520 | orchestrator | 19:16:50.209 STDOUT terraform:  + no_gateway = false 2025-06-02 19:16:50.209559 | orchestrator | 19:16:50.209 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:50.209590 | orchestrator | 19:16:50.209 STDOUT terraform:  + service_types = (known after apply) 2025-06-02 19:16:50.209625 | orchestrator | 19:16:50.209 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:50.209633 | orchestrator | 19:16:50.209 STDOUT terraform:  + allocation_pool { 2025-06-02 19:16:50.209668 | orchestrator | 19:16:50.209 STDOUT terraform:  + end = "192.168.31.250" 2025-06-02 19:16:50.209691 | orchestrator | 19:16:50.209 STDOUT terraform:  + start = "192.168.31.200" 2025-06-02 19:16:50.209697 | orchestrator | 19:16:50.209 STDOUT terraform:  } 2025-06-02 19:16:50.209703 | orchestrator | 19:16:50.209 STDOUT terraform:  } 2025-06-02 19:16:50.209736 | orchestrator | 19:16:50.209 STDOUT terraform:  # terraform_data.image will be created 2025-06-02 19:16:50.209800 | orchestrator | 19:16:50.209 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-02 19:16:50.209808 | orchestrator | 19:16:50.209 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.209812 | orchestrator | 19:16:50.209 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-02 19:16:50.209834 | orchestrator | 19:16:50.209 STDOUT terraform:  + output = (known after apply) 2025-06-02 19:16:50.209839 | orchestrator | 19:16:50.209 STDOUT terraform:  } 2025-06-02 19:16:50.209874 | orchestrator | 
19:16:50.209 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-02 19:16:50.209904 | orchestrator | 19:16:50.209 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-02 19:16:50.209938 | orchestrator | 19:16:50.209 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:50.209945 | orchestrator | 19:16:50.209 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-02 19:16:50.209977 | orchestrator | 19:16:50.209 STDOUT terraform:  + output = (known after apply) 2025-06-02 19:16:50.209983 | orchestrator | 19:16:50.209 STDOUT terraform:  } 2025-06-02 19:16:50.210027 | orchestrator | 19:16:50.209 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-02 19:16:50.210036 | orchestrator | 19:16:50.210 STDOUT terraform: Changes to Outputs: 2025-06-02 19:16:50.210069 | orchestrator | 19:16:50.210 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-02 19:16:50.210095 | orchestrator | 19:16:50.210 STDOUT terraform:  + private_key = (sensitive value) 2025-06-02 19:16:50.210699 | orchestrator | 19:16:50.210 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-02 19:16:50.212970 | orchestrator | 19:16:50.211 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=aaefadf8-bc57-9289-f8bc-88678b41e783] 2025-06-02 19:16:50.344128 | orchestrator | 19:16:50.343 STDOUT terraform: terraform_data.image: Creating... 2025-06-02 19:16:50.436836 | orchestrator | 19:16:50.436 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=dfd17747-e241-945a-8e4e-fc39c2013019] 2025-06-02 19:16:50.452096 | orchestrator | 19:16:50.451 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-06-02 19:16:50.456453 | orchestrator | 19:16:50.456 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-02 19:16:50.461332 | orchestrator | 19:16:50.461 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 
2025-06-02 19:16:50.467144 | orchestrator | 19:16:50.467 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-02 19:16:50.471485 | orchestrator | 19:16:50.471 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-02 19:16:50.471865 | orchestrator | 19:16:50.471 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-02 19:16:50.473461 | orchestrator | 19:16:50.473 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-06-02 19:16:50.475645 | orchestrator | 19:16:50.475 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-02 19:16:50.478170 | orchestrator | 19:16:50.478 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-06-02 19:16:50.480636 | orchestrator | 19:16:50.480 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-06-02 19:16:50.912958 | orchestrator | 19:16:50.912 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-02 19:16:50.918870 | orchestrator | 19:16:50.918 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-02 19:16:50.923684 | orchestrator | 19:16:50.923 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-02 19:16:50.925890 | orchestrator | 19:16:50.925 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-06-02 19:16:50.967619 | orchestrator | 19:16:50.967 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-06-02 19:16:50.977904 | orchestrator | 19:16:50.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
2025-06-02 19:16:56.483108 | orchestrator | 19:16:56.482 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=a7e8b0e0-68f2-42cd-9762-41bd26a4a1e1] 2025-06-02 19:16:56.494625 | orchestrator | 19:16:56.494 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-06-02 19:17:00.469450 | orchestrator | 19:17:00.469 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-06-02 19:17:00.472617 | orchestrator | 19:17:00.472 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-06-02 19:17:00.473024 | orchestrator | 19:17:00.472 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-02 19:17:00.473720 | orchestrator | 19:17:00.473 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-02 19:17:00.477087 | orchestrator | 19:17:00.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-02 19:17:00.481530 | orchestrator | 19:17:00.481 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-06-02 19:17:00.924399 | orchestrator | 19:17:00.924 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-02 19:17:00.926699 | orchestrator | 19:17:00.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-06-02 19:17:00.979247 | orchestrator | 19:17:00.978 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... 
[10s elapsed] 2025-06-02 19:17:01.067190 | orchestrator | 19:17:01.066 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=9ffd9bf2-84a3-4d27-b5f3-3356e7749f76] 2025-06-02 19:17:01.079264 | orchestrator | 19:17:01.079 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=79afc6c6-58f6-4307-87e0-09bd0d860ce4] 2025-06-02 19:17:01.084552 | orchestrator | 19:17:01.084 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-02 19:17:01.085221 | orchestrator | 19:17:01.085 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-06-02 19:17:01.085548 | orchestrator | 19:17:01.085 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=5963bf14-863c-43c8-92fe-9d0d39c425c6] 2025-06-02 19:17:01.089595 | orchestrator | 19:17:01.089 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=67fcb81d-853f-45f3-94a3-23b2668aa3db] 2025-06-02 19:17:01.093270 | orchestrator | 19:17:01.093 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-02 19:17:01.095056 | orchestrator | 19:17:01.094 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-06-02 19:17:01.110733 | orchestrator | 19:17:01.110 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=5887df38-b3fa-4a4d-abd1-7bd86d74143f] 2025-06-02 19:17:01.120772 | orchestrator | 19:17:01.120 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2025-06-02 19:17:01.132204 | orchestrator | 19:17:01.131 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=3a656ee4-c3af-49b4-a6f0-0feb15d5e250] 2025-06-02 19:17:01.138110 | orchestrator | 19:17:01.137 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-02 19:17:01.169902 | orchestrator | 19:17:01.169 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=0cd9bba3-eceb-4382-8287-3e8628ac0773] 2025-06-02 19:17:01.185729 | orchestrator | 19:17:01.185 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=117bc598-c43f-4136-b957-2f363a6b8335] 2025-06-02 19:17:01.187809 | orchestrator | 19:17:01.187 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-06-02 19:17:01.198209 | orchestrator | 19:17:01.196 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=05600669-f5a9-4eeb-abdf-0ca8c213e696] 2025-06-02 19:17:01.198318 | orchestrator | 19:17:01.197 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=211b1e6d2366dbf5dbc82cd2afc38822fabb6a82] 2025-06-02 19:17:01.199444 | orchestrator | 19:17:01.199 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-06-02 19:17:01.203880 | orchestrator | 19:17:01.203 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=7c7630da20db0af8897eaa9e5c1ba820f45d2146] 2025-06-02 19:17:01.205818 | orchestrator | 19:17:01.205 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-02 19:17:06.497246 | orchestrator | 19:17:06.496 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... 
[10s elapsed] 2025-06-02 19:17:06.924735 | orchestrator | 19:17:06.924 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=5b521f65-f8fa-490f-8a49-c7f8940b6af3] 2025-06-02 19:17:07.031516 | orchestrator | 19:17:07.031 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=79ce4337-ce98-42fd-8332-d224c705f876] 2025-06-02 19:17:07.042844 | orchestrator | 19:17:07.042 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-02 19:17:11.086045 | orchestrator | 19:17:11.085 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-06-02 19:17:11.086353 | orchestrator | 19:17:11.086 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-06-02 19:17:11.094139 | orchestrator | 19:17:11.093 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-06-02 19:17:11.096386 | orchestrator | 19:17:11.096 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-06-02 19:17:11.121711 | orchestrator | 19:17:11.121 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-06-02 19:17:11.139159 | orchestrator | 19:17:11.138 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-06-02 19:17:11.452389 | orchestrator | 19:17:11.452 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=794ae5fd-3701-41a4-bdcf-eea74a87ef71] 2025-06-02 19:17:11.470213 | orchestrator | 19:17:11.469 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=2a98dc1a-5ef0-44e2-89ee-a4db820b5c80] 2025-06-02 19:17:11.507092 | orchestrator | 19:17:11.506 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=43f457e2-7039-41c2-9765-8ce3083f4c01] 2025-06-02 19:17:11.507280 | orchestrator | 19:17:11.507 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=3109e32e-09f6-49b1-9102-762fc3bfff6d] 2025-06-02 19:17:11.538907 | orchestrator | 19:17:11.538 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=b2f5959e-2671-4f54-a7a6-8d966ef42c9c] 2025-06-02 19:17:12.024539 | orchestrator | 19:17:12.024 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=712c0a2f-f072-4fe5-8606-72d3e6b109d2] 2025-06-02 19:17:14.713622 | orchestrator | 19:17:14.713 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=1f30303f-6442-4558-9563-d95931755581] 2025-06-02 19:17:14.719831 | orchestrator | 19:17:14.719 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-06-02 19:17:14.720061 | orchestrator | 19:17:14.719 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-06-02 19:17:14.724162 | orchestrator | 19:17:14.724 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 
2025-06-02 19:17:14.914811 | orchestrator | 19:17:14.907 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=0006f01f-3585-44bf-beee-9dc2523a3688] 2025-06-02 19:17:14.914894 | orchestrator | 19:17:14.910 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f864e297-2b7e-4552-ac0b-022d4009920d] 2025-06-02 19:17:14.917293 | orchestrator | 19:17:14.917 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-06-02 19:17:14.923119 | orchestrator | 19:17:14.922 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-06-02 19:17:14.923329 | orchestrator | 19:17:14.923 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-06-02 19:17:14.924669 | orchestrator | 19:17:14.924 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-02 19:17:14.927264 | orchestrator | 19:17:14.927 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-02 19:17:14.928553 | orchestrator | 19:17:14.928 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-06-02 19:17:14.929534 | orchestrator | 19:17:14.929 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-06-02 19:17:14.932572 | orchestrator | 19:17:14.932 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-02 19:17:14.936162 | orchestrator | 19:17:14.935 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 
2025-06-02 19:17:15.080302 | orchestrator | 19:17:15.079 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=378fd7fa-fbae-4f56-9034-f945775ad7b9] 2025-06-02 19:17:15.095249 | orchestrator | 19:17:15.095 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-06-02 19:17:15.228245 | orchestrator | 19:17:15.227 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=dadbd4da-7662-448b-b0ba-6c9fafa66b74] 2025-06-02 19:17:15.244316 | orchestrator | 19:17:15.244 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-02 19:17:15.256491 | orchestrator | 19:17:15.256 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=2f6f8561-8af2-4b4e-af62-3d641e4d267e] 2025-06-02 19:17:15.269955 | orchestrator | 19:17:15.269 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-06-02 19:17:15.495089 | orchestrator | 19:17:15.494 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=8463817d-8929-4216-a22b-4dfedffc3e8f] 2025-06-02 19:17:15.510808 | orchestrator | 19:17:15.510 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-02 19:17:15.648810 | orchestrator | 19:17:15.648 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=2c765638-9fbb-4ba7-984e-aa10a3b6729b] 2025-06-02 19:17:15.667963 | orchestrator | 19:17:15.667 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
2025-06-02 19:17:15.809116 | orchestrator | 19:17:15.808 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=480bd211-597e-442c-95c4-5247c1b0e255] 2025-06-02 19:17:15.816798 | orchestrator | 19:17:15.816 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-02 19:17:15.932098 | orchestrator | 19:17:15.931 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=662bc258-c18d-49e5-a5b4-e1c174201a76] 2025-06-02 19:17:15.939033 | orchestrator | 19:17:15.938 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-02 19:17:16.081235 | orchestrator | 19:17:16.080 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=78658ec0-7d67-488a-a4ae-ec4dee35de25] 2025-06-02 19:17:16.227280 | orchestrator | 19:17:16.226 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=97c27010-bf74-439c-a45e-a1b7298e140b] 2025-06-02 19:17:20.681622 | orchestrator | 19:17:20.681 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=559fa97b-36fe-4453-8413-e5557202667e] 2025-06-02 19:17:20.705399 | orchestrator | 19:17:20.704 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=e1ee4ce7-1ca5-40f9-bd86-dd0b8876a1ef] 2025-06-02 19:17:20.801110 | orchestrator | 19:17:20.800 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=31095d61-b4c1-4659-b5df-164622dbc481] 2025-06-02 19:17:20.880757 | orchestrator | 19:17:20.880 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=a87cf084-c445-4a5a-9c5c-3ef3b1bc1d7d] 2025-06-02 19:17:20.909344 | 
orchestrator | 19:17:20.909 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=fe3ec940-dd48-42fd-a6d7-51490e43ea18] 2025-06-02 19:17:21.324098 | orchestrator | 19:17:21.323 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=c5ba1c70-6b76-46fd-8f89-a032351c9724] 2025-06-02 19:17:21.427956 | orchestrator | 19:17:21.427 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=cff32b77-8cc8-44c9-a519-2bd0dbad8ad1] 2025-06-02 19:17:21.869959 | orchestrator | 19:17:21.869 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=f4a39b69-ecf7-4940-9ed0-135860a21a42] 2025-06-02 19:17:21.897556 | orchestrator | 19:17:21.897 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-02 19:17:21.906882 | orchestrator | 19:17:21.906 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-06-02 19:17:21.911235 | orchestrator | 19:17:21.911 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-02 19:17:21.913803 | orchestrator | 19:17:21.913 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-06-02 19:17:21.926718 | orchestrator | 19:17:21.917 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-02 19:17:21.926793 | orchestrator | 19:17:21.924 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-02 19:17:21.933781 | orchestrator | 19:17:21.933 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 
2025-06-02 19:17:28.807255 | orchestrator | 19:17:28.806 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=b07a5a7b-c1c4-484d-8272-bb43a244b0ca] 2025-06-02 19:17:28.817150 | orchestrator | 19:17:28.816 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-06-02 19:17:28.823989 | orchestrator | 19:17:28.823 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-06-02 19:17:28.825955 | orchestrator | 19:17:28.825 STDOUT terraform: local_file.inventory: Creating... 2025-06-02 19:17:28.832465 | orchestrator | 19:17:28.832 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=57db01c4c43b7ac6548e11ab5dc029e491594aed] 2025-06-02 19:17:28.832996 | orchestrator | 19:17:28.832 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=cccce069542cc08c3d134ac5278e8b10c9f2b75f] 2025-06-02 19:17:29.604057 | orchestrator | 19:17:29.603 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=b07a5a7b-c1c4-484d-8272-bb43a244b0ca] 2025-06-02 19:17:31.910784 | orchestrator | 19:17:31.910 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-06-02 19:17:31.912862 | orchestrator | 19:17:31.912 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-02 19:17:31.915990 | orchestrator | 19:17:31.915 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-02 19:17:31.926311 | orchestrator | 19:17:31.926 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-06-02 19:17:31.928587 | orchestrator | 19:17:31.928 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[10s elapsed] 2025-06-02 19:17:31.934905 | orchestrator | 19:17:31.934 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-02 19:17:41.911728 | orchestrator | 19:17:41.911 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-06-02 19:17:41.913617 | orchestrator | 19:17:41.913 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-06-02 19:17:41.916765 | orchestrator | 19:17:41.916 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-02 19:17:41.927126 | orchestrator | 19:17:41.926 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-06-02 19:17:41.929259 | orchestrator | 19:17:41.929 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-02 19:17:41.935473 | orchestrator | 19:17:41.935 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-06-02 19:17:51.911943 | orchestrator | 19:17:51.911 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-06-02 19:17:51.914480 | orchestrator | 19:17:51.914 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-06-02 19:17:51.917642 | orchestrator | 19:17:51.917 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-06-02 19:17:51.928009 | orchestrator | 19:17:51.927 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-06-02 19:17:51.930199 | orchestrator | 19:17:51.929 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-06-02 19:17:51.936388 | orchestrator | 19:17:51.936 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-06-02 19:17:52.460098 | orchestrator | 19:17:52.459 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=239c33b5-cf5f-46a7-958d-75217cd05e6c] 2025-06-02 19:17:52.517793 | orchestrator | 19:17:52.517 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=137a3a32-3de9-41ea-8767-64470d3f3beb] 2025-06-02 19:18:01.912091 | orchestrator | 19:18:01.911 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-06-02 19:18:01.918512 | orchestrator | 19:18:01.918 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-06-02 19:18:01.928996 | orchestrator | 19:18:01.928 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-06-02 19:18:01.931225 | orchestrator | 19:18:01.931 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-06-02 19:18:02.837587 | orchestrator | 19:18:02.837 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=d5d9acfb-6fd3-4620-a743-4405c69cc209] 2025-06-02 19:18:02.940532 | orchestrator | 19:18:02.940 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=49a9fc15-2db2-4445-b330-20ec49727972] 2025-06-02 19:18:02.946643 | orchestrator | 19:18:02.946 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=5998644f-1b32-4d9c-847c-8da431da2553] 2025-06-02 19:18:03.008809 | orchestrator | 19:18:03.008 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=e517d667-63ee-487c-a551-04e321ff1229] 2025-06-02 19:18:03.022578 | orchestrator | 19:18:03.022 STDOUT terraform: null_resource.node_semaphore: Creating... 
2025-06-02 19:18:03.038070 | orchestrator | 19:18:03.037 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7554899336875640285] 2025-06-02 19:18:03.039343 | orchestrator | 19:18:03.039 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-02 19:18:03.039491 | orchestrator | 19:18:03.039 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-02 19:18:03.040511 | orchestrator | 19:18:03.040 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-02 19:18:03.054228 | orchestrator | 19:18:03.054 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-02 19:18:03.059515 | orchestrator | 19:18:03.059 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-02 19:18:03.060851 | orchestrator | 19:18:03.060 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-02 19:18:03.060897 | orchestrator | 19:18:03.060 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-02 19:18:03.064825 | orchestrator | 19:18:03.064 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-02 19:18:03.067082 | orchestrator | 19:18:03.066 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-02 19:18:03.067118 | orchestrator | 19:18:03.067 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
2025-06-02 19:18:08.742567 | orchestrator | 19:18:08.742 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=d5d9acfb-6fd3-4620-a743-4405c69cc209/5887df38-b3fa-4a4d-abd1-7bd86d74143f] 2025-06-02 19:18:08.748613 | orchestrator | 19:18:08.748 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=5998644f-1b32-4d9c-847c-8da431da2553/67fcb81d-853f-45f3-94a3-23b2668aa3db] 2025-06-02 19:18:08.769060 | orchestrator | 19:18:08.768 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=d5d9acfb-6fd3-4620-a743-4405c69cc209/5963bf14-863c-43c8-92fe-9d0d39c425c6] 2025-06-02 19:18:08.779826 | orchestrator | 19:18:08.779 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=49a9fc15-2db2-4445-b330-20ec49727972/117bc598-c43f-4136-b957-2f363a6b8335] 2025-06-02 19:18:08.806241 | orchestrator | 19:18:08.805 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=5998644f-1b32-4d9c-847c-8da431da2553/79afc6c6-58f6-4307-87e0-09bd0d860ce4] 2025-06-02 19:18:08.823427 | orchestrator | 19:18:08.822 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=d5d9acfb-6fd3-4620-a743-4405c69cc209/9ffd9bf2-84a3-4d27-b5f3-3356e7749f76] 2025-06-02 19:18:08.835702 | orchestrator | 19:18:08.835 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=49a9fc15-2db2-4445-b330-20ec49727972/0cd9bba3-eceb-4382-8287-3e8628ac0773] 2025-06-02 19:18:08.855068 | orchestrator | 19:18:08.854 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=5998644f-1b32-4d9c-847c-8da431da2553/05600669-f5a9-4eeb-abdf-0ca8c213e696] 2025-06-02 19:18:08.887553 | orchestrator | 
19:18:08.887 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=49a9fc15-2db2-4445-b330-20ec49727972/3a656ee4-c3af-49b4-a6f0-0feb15d5e250] 2025-06-02 19:18:13.071191 | orchestrator | 19:18:13.070 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-02 19:18:23.076172 | orchestrator | 19:18:23.075 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-02 19:18:23.529933 | orchestrator | 19:18:23.529 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=2484bbbb-7f19-4483-978f-33d781152cb0] 2025-06-02 19:18:23.581495 | orchestrator | 19:18:23.581 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-06-02 19:18:23.581845 | orchestrator | 19:18:23.581 STDOUT terraform: Outputs: 2025-06-02 19:18:23.581877 | orchestrator | 19:18:23.581 STDOUT terraform: manager_address = 2025-06-02 19:18:23.581913 | orchestrator | 19:18:23.581 STDOUT terraform: private_key = 2025-06-02 19:18:23.760716 | orchestrator | ok: Runtime: 0:01:42.455266 2025-06-02 19:18:23.794781 | 2025-06-02 19:18:23.794944 | TASK [Create infrastructure (stable)] 2025-06-02 19:18:24.327760 | orchestrator | skipping: Conditional result was False 2025-06-02 19:18:24.344111 | 2025-06-02 19:18:24.344269 | TASK [Fetch manager address] 2025-06-02 19:18:24.809532 | orchestrator | ok 2025-06-02 19:18:24.817938 | 2025-06-02 19:18:24.818101 | TASK [Set manager_host address] 2025-06-02 19:18:24.898511 | orchestrator | ok 2025-06-02 19:18:24.910876 | 2025-06-02 19:18:24.911021 | LOOP [Update ansible collections] 2025-06-02 19:18:26.683688 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 19:18:26.684119 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 19:18:26.684221 | orchestrator | 
Starting galaxy collection install process 2025-06-02 19:18:26.684287 | orchestrator | Process install dependency map 2025-06-02 19:18:26.684329 | orchestrator | Starting collection install process 2025-06-02 19:18:26.684365 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-06-02 19:18:26.684410 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-06-02 19:18:26.684462 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-02 19:18:26.684542 | orchestrator | ok: Item: commons Runtime: 0:00:01.373292 2025-06-02 19:18:28.153895 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 19:18:28.154095 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 19:18:28.154131 | orchestrator | Starting galaxy collection install process 2025-06-02 19:18:28.154155 | orchestrator | Process install dependency map 2025-06-02 19:18:28.154177 | orchestrator | Starting collection install process 2025-06-02 19:18:28.154207 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-06-02 19:18:28.154229 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-06-02 19:18:28.154249 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-02 19:18:28.154282 | orchestrator | ok: Item: services Runtime: 0:00:01.221310 2025-06-02 19:18:28.173555 | 2025-06-02 19:18:28.173693 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-02 19:18:38.744021 | orchestrator | ok 2025-06-02 19:18:38.754132 | 2025-06-02 19:18:38.754272 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-06-02 19:19:38.794738 | orchestrator | ok 2025-06-02 19:19:38.806352 | 2025-06-02 19:19:38.806484 | TASK [Fetch manager ssh hostkey] 2025-06-02 19:19:40.383831 | orchestrator | Output suppressed because no_log was given 2025-06-02 19:19:40.399604 | 2025-06-02 19:19:40.399775 | TASK [Get ssh keypair from terraform environment] 2025-06-02 19:19:40.937180 | orchestrator | ok: Runtime: 0:00:00.009343 2025-06-02 19:19:40.951167 | 2025-06-02 19:19:40.951319 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 19:19:40.987732 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-02 19:19:40.998810 | 2025-06-02 19:19:40.998986 | TASK [Run manager part 0] 2025-06-02 19:19:42.173403 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 19:19:42.390459 | orchestrator | 2025-06-02 19:19:42.390540 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-02 19:19:42.390551 | orchestrator | 2025-06-02 19:19:42.390571 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-02 19:19:44.207002 | orchestrator | ok: [testbed-manager] 2025-06-02 19:19:44.207126 | orchestrator | 2025-06-02 19:19:44.207200 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 19:19:44.207236 | orchestrator | 2025-06-02 19:19:44.207270 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 19:19:46.444701 | orchestrator | ok: [testbed-manager] 2025-06-02 19:19:46.444739 | orchestrator | 2025-06-02 19:19:46.444745 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 19:19:47.125007 | 
orchestrator | ok: [testbed-manager] 2025-06-02 19:19:47.125065 | orchestrator | 2025-06-02 19:19:47.125076 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-02 19:19:47.188979 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:47.189099 | orchestrator | 2025-06-02 19:19:47.189108 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-02 19:19:47.214219 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:47.214336 | orchestrator | 2025-06-02 19:19:47.214347 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 19:19:47.237563 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:47.237598 | orchestrator | 2025-06-02 19:19:47.237603 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-02 19:19:47.272045 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:47.272086 | orchestrator | 2025-06-02 19:19:47.272095 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 19:19:47.302319 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:47.302354 | orchestrator | 2025-06-02 19:19:47.302360 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-02 19:19:47.341281 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:47.341331 | orchestrator | 2025-06-02 19:19:47.341341 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-02 19:19:47.383265 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:47.383305 | orchestrator | 2025-06-02 19:19:47.383313 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-02 19:19:48.176991 | orchestrator | changed: [testbed-manager] 2025-06-02 19:19:48.177067 | 
orchestrator | 2025-06-02 19:19:48.177084 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-02 19:22:59.871429 | orchestrator | changed: [testbed-manager] 2025-06-02 19:22:59.871503 | orchestrator | 2025-06-02 19:22:59.871521 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-02 19:24:18.156088 | orchestrator | changed: [testbed-manager] 2025-06-02 19:24:18.156188 | orchestrator | 2025-06-02 19:24:18.156205 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 19:24:40.042972 | orchestrator | changed: [testbed-manager] 2025-06-02 19:24:40.043069 | orchestrator | 2025-06-02 19:24:40.043089 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-02 19:24:48.680684 | orchestrator | changed: [testbed-manager] 2025-06-02 19:24:48.680827 | orchestrator | 2025-06-02 19:24:48.680848 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 19:24:48.729960 | orchestrator | ok: [testbed-manager] 2025-06-02 19:24:48.730083 | orchestrator | 2025-06-02 19:24:48.730103 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-02 19:24:49.512492 | orchestrator | ok: [testbed-manager] 2025-06-02 19:24:49.512536 | orchestrator | 2025-06-02 19:24:49.512546 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-02 19:24:50.240392 | orchestrator | changed: [testbed-manager] 2025-06-02 19:24:50.240431 | orchestrator | 2025-06-02 19:24:50.240440 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-02 19:24:56.642487 | orchestrator | changed: [testbed-manager] 2025-06-02 19:24:56.642585 | orchestrator | 2025-06-02 19:24:56.642629 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-06-02 19:25:02.701437 | orchestrator | changed: [testbed-manager] 2025-06-02 19:25:02.701530 | orchestrator | 2025-06-02 19:25:02.701551 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-02 19:25:05.365850 | orchestrator | changed: [testbed-manager] 2025-06-02 19:25:05.389187 | orchestrator | 2025-06-02 19:25:05.389232 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-02 19:25:07.125970 | orchestrator | changed: [testbed-manager] 2025-06-02 19:25:07.126081 | orchestrator | 2025-06-02 19:25:07.126098 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-02 19:25:08.309618 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 19:25:08.309691 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 19:25:08.309705 | orchestrator | 2025-06-02 19:25:08.309718 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-02 19:25:08.353943 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 19:25:08.353988 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 19:25:08.353994 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-02 19:25:08.353999 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-02 19:25:13.479547 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 19:25:13.479622 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 19:25:13.479634 | orchestrator | 2025-06-02 19:25:13.479643 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-02 19:25:14.052356 | orchestrator | changed: [testbed-manager] 2025-06-02 19:25:14.052443 | orchestrator | 2025-06-02 19:25:14.052461 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-02 19:28:36.834929 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-02 19:28:36.835048 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-02 19:28:36.835068 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-02 19:28:36.835082 | orchestrator | 2025-06-02 19:28:36.835095 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-02 19:28:39.184231 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-02 19:28:39.184319 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-02 19:28:39.184336 | orchestrator | 2025-06-02 19:28:39.184348 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-02 19:28:39.184361 | orchestrator | 2025-06-02 19:28:39.184373 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 19:28:40.620769 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:40.620895 | orchestrator | 2025-06-02 19:28:40.620914 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-02 19:28:40.667303 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:40.667362 | 
orchestrator | 2025-06-02 19:28:40.667371 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-02 19:28:40.739442 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:40.739521 | orchestrator | 2025-06-02 19:28:40.739537 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-02 19:28:41.507612 | orchestrator | changed: [testbed-manager] 2025-06-02 19:28:41.507693 | orchestrator | 2025-06-02 19:28:41.507709 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-02 19:28:42.269206 | orchestrator | changed: [testbed-manager] 2025-06-02 19:28:42.269323 | orchestrator | 2025-06-02 19:28:42.269342 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-02 19:28:43.650700 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-02 19:28:43.650835 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-02 19:28:43.650866 | orchestrator | 2025-06-02 19:28:43.650903 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-02 19:28:45.089886 | orchestrator | changed: [testbed-manager] 2025-06-02 19:28:45.089988 | orchestrator | 2025-06-02 19:28:45.090004 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-02 19:28:46.839104 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 19:28:46.839192 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-02 19:28:46.839207 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-02 19:28:46.839219 | orchestrator | 2025-06-02 19:28:46.839232 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-02 19:28:47.411842 | orchestrator | changed: [testbed-manager] 
2025-06-02 19:28:47.412653 | orchestrator | 2025-06-02 19:28:47.412696 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-02 19:28:47.485600 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:28:47.485653 | orchestrator | 2025-06-02 19:28:47.485662 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-02 19:28:48.333195 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 19:28:48.333283 | orchestrator | changed: [testbed-manager] 2025-06-02 19:28:48.333301 | orchestrator | 2025-06-02 19:28:48.333314 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-02 19:28:48.373180 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:28:48.373256 | orchestrator | 2025-06-02 19:28:48.373269 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-02 19:28:48.413955 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:28:48.414043 | orchestrator | 2025-06-02 19:28:48.414058 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-02 19:28:48.453133 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:28:48.453211 | orchestrator | 2025-06-02 19:28:48.453228 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-02 19:28:48.502117 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:28:48.502193 | orchestrator | 2025-06-02 19:28:48.502210 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-02 19:28:49.217019 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:49.217108 | orchestrator | 2025-06-02 19:28:49.217125 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 19:28:49.217138 | orchestrator | 2025-06-02 
19:28:49.217151 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 19:28:50.630362 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:50.630432 | orchestrator | 2025-06-02 19:28:50.630447 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-02 19:28:51.617729 | orchestrator | changed: [testbed-manager] 2025-06-02 19:28:51.617858 | orchestrator | 2025-06-02 19:28:51.617876 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:28:51.617889 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 19:28:51.617900 | orchestrator | 2025-06-02 19:28:51.883831 | orchestrator | ok: Runtime: 0:09:10.418459 2025-06-02 19:28:51.902274 | 2025-06-02 19:28:51.902432 | TASK [Point out that the log in on the manager is now possible] 2025-06-02 19:28:51.947912 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-06-02 19:28:51.967021 | 2025-06-02 19:28:51.967230 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 19:28:52.005905 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-06-02 19:28:52.017397 | 2025-06-02 19:28:52.017535 | TASK [Run manager part 1 + 2] 2025-06-02 19:28:52.909761 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 19:28:52.961720 | orchestrator | 2025-06-02 19:28:52.961822 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-02 19:28:52.961841 | orchestrator | 2025-06-02 19:28:52.961869 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 19:28:55.930526 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:55.930712 | orchestrator | 2025-06-02 19:28:55.930762 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 19:28:55.977350 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:28:55.977429 | orchestrator | 2025-06-02 19:28:55.977448 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 19:28:56.020268 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:56.020321 | orchestrator | 2025-06-02 19:28:56.020330 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-02 19:28:56.059499 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:56.059546 | orchestrator | 2025-06-02 19:28:56.059554 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-02 19:28:56.130609 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:56.130660 | orchestrator | 2025-06-02 19:28:56.130668 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-02 19:28:56.190051 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:56.190101 | orchestrator | 2025-06-02 19:28:56.190109 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-02 19:28:56.235739 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-02 19:28:56.235860 | orchestrator | 2025-06-02 19:28:56.235877 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-02 19:28:56.957398 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:56.957482 | orchestrator | 2025-06-02 19:28:56.957500 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-02 19:28:57.009925 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:28:57.009989 | orchestrator | 2025-06-02 19:28:57.009997 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-02 19:28:58.364412 | orchestrator | changed: [testbed-manager] 2025-06-02 19:28:58.364504 | orchestrator | 2025-06-02 19:28:58.364524 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 19:28:58.948348 | orchestrator | ok: [testbed-manager] 2025-06-02 19:28:58.948429 | orchestrator | 2025-06-02 19:28:58.948442 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 19:29:00.046949 | orchestrator | changed: [testbed-manager] 2025-06-02 19:29:00.047033 | orchestrator | 2025-06-02 19:29:00.047051 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 19:29:13.375273 | orchestrator | changed: [testbed-manager] 2025-06-02 19:29:13.375332 | orchestrator | 2025-06-02 19:29:13.375339 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 19:29:14.023023 | orchestrator | ok: [testbed-manager] 2025-06-02 19:29:14.023109 | orchestrator | 2025-06-02 19:29:14.023126 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-06-02 19:29:14.072823 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:29:14.072899 | orchestrator | 2025-06-02 19:29:14.072911 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-02 19:29:15.021932 | orchestrator | changed: [testbed-manager] 2025-06-02 19:29:15.021976 | orchestrator | 2025-06-02 19:29:15.021985 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-02 19:29:15.953358 | orchestrator | changed: [testbed-manager] 2025-06-02 19:29:15.953419 | orchestrator | 2025-06-02 19:29:15.953425 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-02 19:29:16.477173 | orchestrator | changed: [testbed-manager] 2025-06-02 19:29:16.477260 | orchestrator | 2025-06-02 19:29:16.477276 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-02 19:29:16.519373 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 19:29:16.519445 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 19:29:16.519453 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-02 19:29:16.519459 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-02 19:29:18.389864 | orchestrator | changed: [testbed-manager] 2025-06-02 19:29:18.390003 | orchestrator | 2025-06-02 19:29:18.390054 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-02 19:29:27.498215 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-02 19:29:27.498321 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-02 19:29:27.498339 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-02 19:29:27.498351 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-02 19:29:27.498372 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-02 19:29:27.498384 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-02 19:29:27.498395 | orchestrator | 2025-06-02 19:29:27.498408 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-02 19:29:28.531983 | orchestrator | changed: [testbed-manager] 2025-06-02 19:29:28.532066 | orchestrator | 2025-06-02 19:29:28.532098 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-02 19:29:28.573569 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:29:28.573639 | orchestrator | 2025-06-02 19:29:28.573653 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-02 19:29:31.716100 | orchestrator | changed: [testbed-manager] 2025-06-02 19:29:31.716173 | orchestrator | 2025-06-02 19:29:31.716185 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-02 19:29:31.756474 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:29:31.756547 | orchestrator | 2025-06-02 19:29:31.756569 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-02 19:31:05.720941 | orchestrator | changed: [testbed-manager] 2025-06-02 
19:31:05.720981 | orchestrator |
2025-06-02 19:31:05.720990 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 19:31:06.833624 | orchestrator | ok: [testbed-manager]
2025-06-02 19:31:06.833663 | orchestrator |
2025-06-02 19:31:06.833671 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:31:06.833679 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 19:31:06.833685 | orchestrator |
2025-06-02 19:31:07.147725 | orchestrator | ok: Runtime: 0:02:14.572673
2025-06-02 19:31:07.156804 |
2025-06-02 19:31:07.156922 | TASK [Reboot manager]
2025-06-02 19:31:08.716839 | orchestrator | ok: Runtime: 0:00:00.980503
2025-06-02 19:31:08.732263 |
2025-06-02 19:31:08.732420 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-06-02 19:31:25.140901 | orchestrator | ok
2025-06-02 19:31:25.150034 |
2025-06-02 19:31:25.150168 | TASK [Wait a little longer for the manager so that everything is ready]
2025-06-02 19:32:25.198350 | orchestrator | ok
2025-06-02 19:32:25.210412 |
2025-06-02 19:32:25.210595 | TASK [Deploy manager + bootstrap nodes]
2025-06-02 19:32:27.813444 | orchestrator |
2025-06-02 19:32:27.813630 | orchestrator | # DEPLOY MANAGER
2025-06-02 19:32:27.813654 | orchestrator |
2025-06-02 19:32:27.813668 | orchestrator | + set -e
2025-06-02 19:32:27.813681 | orchestrator | + echo
2025-06-02 19:32:27.813694 | orchestrator | + echo '# DEPLOY MANAGER'
2025-06-02 19:32:27.813741 | orchestrator | + echo
2025-06-02 19:32:27.813794 | orchestrator | + cat /opt/manager-vars.sh
2025-06-02 19:32:27.816866 | orchestrator | export NUMBER_OF_NODES=6
2025-06-02 19:32:27.816920 | orchestrator |
2025-06-02 19:32:27.816935 | orchestrator | export CEPH_VERSION=reef
2025-06-02 19:32:27.816948 | orchestrator | export CONFIGURATION_VERSION=main
2025-06-02 19:32:27.816961 | orchestrator | export MANAGER_VERSION=latest
2025-06-02 19:32:27.816984 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-06-02 19:32:27.816995 | orchestrator |
2025-06-02 19:32:27.817014 | orchestrator | export ARA=false
2025-06-02 19:32:27.817028 | orchestrator | export DEPLOY_MODE=manager
2025-06-02 19:32:27.817056 | orchestrator | export TEMPEST=false
2025-06-02 19:32:27.817073 | orchestrator | export IS_ZUUL=true
2025-06-02 19:32:27.817085 | orchestrator |
2025-06-02 19:32:27.817102 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-06-02 19:32:27.817114 | orchestrator | export EXTERNAL_API=false
2025-06-02 19:32:27.817124 | orchestrator |
2025-06-02 19:32:27.817135 | orchestrator | export IMAGE_USER=ubuntu
2025-06-02 19:32:27.817148 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-06-02 19:32:27.817159 | orchestrator |
2025-06-02 19:32:27.817169 | orchestrator | export CEPH_STACK=ceph-ansible
2025-06-02 19:32:27.817187 | orchestrator |
2025-06-02 19:32:27.817198 | orchestrator | + echo
2025-06-02 19:32:27.817210 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 19:32:27.818004 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 19:32:27.818090 | orchestrator | ++ INTERACTIVE=false
2025-06-02 19:32:27.818138 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 19:32:27.818156 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 19:32:27.818182 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 19:32:27.818219 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 19:32:27.818230 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 19:32:27.818241 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 19:32:27.818275 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 19:32:27.818294 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 19:32:27.818315 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 19:32:27.818328 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 19:32:27.818338 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 19:32:27.818349 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 19:32:27.818369 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 19:32:27.818379 | orchestrator | ++ export ARA=false
2025-06-02 19:32:27.818390 | orchestrator | ++ ARA=false
2025-06-02 19:32:27.818401 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 19:32:27.818411 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 19:32:27.818426 | orchestrator | ++ export TEMPEST=false
2025-06-02 19:32:27.818437 | orchestrator | ++ TEMPEST=false
2025-06-02 19:32:27.818447 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 19:32:27.818458 | orchestrator | ++ IS_ZUUL=true
2025-06-02 19:32:27.818468 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-06-02 19:32:27.818479 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-06-02 19:32:27.818490 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 19:32:27.818500 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 19:32:27.818510 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 19:32:27.818521 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 19:32:27.818532 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 19:32:27.818542 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 19:32:27.818553 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 19:32:27.818563 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 19:32:27.818574 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-06-02 19:32:27.871430 | orchestrator | + docker version
2025-06-02 19:32:28.132590 | orchestrator | Client: Docker Engine - Community
2025-06-02 19:32:28.132696 | orchestrator | Version: 27.5.1
2025-06-02 19:32:28.132778 | orchestrator | API version: 1.47
2025-06-02 19:32:28.132790 | orchestrator | Go version: go1.22.11
2025-06-02 19:32:28.132801 | orchestrator | Git commit: 9f9e405
2025-06-02 19:32:28.132812 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 19:32:28.132824 | orchestrator | OS/Arch: linux/amd64
2025-06-02 19:32:28.132835 | orchestrator | Context: default
2025-06-02 19:32:28.132845 | orchestrator |
2025-06-02 19:32:28.132856 | orchestrator | Server: Docker Engine - Community
2025-06-02 19:32:28.132867 | orchestrator | Engine:
2025-06-02 19:32:28.132878 | orchestrator | Version: 27.5.1
2025-06-02 19:32:28.132889 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-06-02 19:32:28.132929 | orchestrator | Go version: go1.22.11
2025-06-02 19:32:28.132941 | orchestrator | Git commit: 4c9b3b0
2025-06-02 19:32:28.132951 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 19:32:28.132962 | orchestrator | OS/Arch: linux/amd64
2025-06-02 19:32:28.132973 | orchestrator | Experimental: false
2025-06-02 19:32:28.132983 | orchestrator | containerd:
2025-06-02 19:32:28.132994 | orchestrator | Version: 1.7.27
2025-06-02 19:32:28.133005 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-06-02 19:32:28.133016 | orchestrator | runc:
2025-06-02 19:32:28.133027 | orchestrator | Version: 1.2.5
2025-06-02 19:32:28.133037 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-06-02 19:32:28.133048 | orchestrator | docker-init:
2025-06-02 19:32:28.133058 | orchestrator | Version: 0.19.0
2025-06-02 19:32:28.133070 | orchestrator | GitCommit: de40ad0
2025-06-02 19:32:28.136309 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-06-02 19:32:28.145904 | orchestrator | + set -e
2025-06-02 19:32:28.145938 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 19:32:28.145982 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 19:32:28.146004 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 19:32:28.146089 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 19:32:28.146136 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 19:32:28.146187 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 19:32:28.146200 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 19:32:28.146211 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 19:32:28.146221 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 19:32:28.146232 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 19:32:28.146243 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 19:32:28.146253 | orchestrator | ++ export ARA=false
2025-06-02 19:32:28.146264 | orchestrator | ++ ARA=false
2025-06-02 19:32:28.146275 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 19:32:28.146285 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 19:32:28.146295 | orchestrator | ++ export TEMPEST=false
2025-06-02 19:32:28.146306 | orchestrator | ++ TEMPEST=false
2025-06-02 19:32:28.146316 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 19:32:28.146327 | orchestrator | ++ IS_ZUUL=true
2025-06-02 19:32:28.146337 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-06-02 19:32:28.146355 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-06-02 19:32:28.146366 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 19:32:28.146377 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 19:32:28.146387 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 19:32:28.146398 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 19:32:28.146408 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 19:32:28.146419 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 19:32:28.146430 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 19:32:28.146440 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 19:32:28.146450 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 19:32:28.146461 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 19:32:28.146472 | orchestrator | ++ INTERACTIVE=false
2025-06-02 19:32:28.146482 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 19:32:28.146497 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 19:32:28.146508 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-02 19:32:28.146518 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-02 19:32:28.146529 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-06-02 19:32:28.154078 | orchestrator | + set -e
2025-06-02 19:32:28.154123 | orchestrator | + VERSION=reef
2025-06-02 19:32:28.155397 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-06-02 19:32:28.160759 | orchestrator | + [[ -n ceph_version: reef ]]
2025-06-02 19:32:28.160789 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-06-02 19:32:28.166306 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-06-02 19:32:28.172250 | orchestrator | + set -e
2025-06-02 19:32:28.172822 | orchestrator | + VERSION=2024.2
2025-06-02 19:32:28.173550 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-06-02 19:32:28.177717 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-06-02 19:32:28.177751 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-06-02 19:32:28.182865 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-06-02 19:32:28.184104 | orchestrator | ++ semver latest 7.0.0
2025-06-02 19:32:28.243319 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-02 19:32:28.243377 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-02 19:32:28.243390 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-06-02 19:32:28.243402 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-06-02 19:32:28.283258 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 19:32:28.286406 | orchestrator | + source /opt/venv/bin/activate
2025-06-02 19:32:28.287573 | orchestrator | ++ deactivate nondestructive
2025-06-02 19:32:28.287685 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:32:28.287740 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:32:28.287753 | orchestrator | ++ hash -r
2025-06-02 19:32:28.287765 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:32:28.287776 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-02 19:32:28.287808 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-02 19:32:28.287821 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-02 19:32:28.287833 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 19:32:28.287845 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 19:32:28.287856 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 19:32:28.287867 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 19:32:28.287878 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 19:32:28.287894 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 19:32:28.287905 | orchestrator | ++ export PATH
2025-06-02 19:32:28.287916 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:32:28.287939 | orchestrator | ++ '[' -z '' ']'
2025-06-02 19:32:28.287956 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 19:32:28.287967 | orchestrator | ++ PS1='(venv) '
2025-06-02 19:32:28.287977 | orchestrator | ++ export PS1
2025-06-02 19:32:28.287988 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 19:32:28.288055 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 19:32:28.288075 | orchestrator | ++ hash -r
2025-06-02 19:32:28.288111 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-06-02 19:32:29.584064 | orchestrator |
2025-06-02 19:32:29.584179 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-06-02 19:32:29.584196 | orchestrator |
2025-06-02 19:32:29.584209 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 19:32:30.179900 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:30.180009 | orchestrator |
2025-06-02 19:32:30.180024 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 19:32:31.203755 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:31.203864 | orchestrator |
2025-06-02 19:32:31.203881 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-06-02 19:32:31.203893 | orchestrator |
2025-06-02 19:32:31.203905 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 19:32:33.691409 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:33.691530 | orchestrator |
2025-06-02 19:32:33.691547 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-06-02 19:32:33.749034 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:33.749123 | orchestrator |
2025-06-02 19:32:33.749139 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-06-02 19:32:34.218991 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:34.219100 | orchestrator |
2025-06-02 19:32:34.219116 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-06-02 19:32:34.262772 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:32:34.262831 | orchestrator |
2025-06-02 19:32:34.262845 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-02 19:32:34.625887 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:34.625992 | orchestrator |
2025-06-02 19:32:34.626008 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-06-02 19:32:34.684810 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:32:34.684909 | orchestrator |
2025-06-02 19:32:34.684923 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-06-02 19:32:35.016648 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:35.016818 | orchestrator |
2025-06-02 19:32:35.016836 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-06-02 19:32:35.141662 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:32:35.142645 | orchestrator |
2025-06-02 19:32:35.142687 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-06-02 19:32:35.142727 | orchestrator |
2025-06-02 19:32:35.142741 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 19:32:36.963677 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:36.963834 | orchestrator |
2025-06-02 19:32:36.963851 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-06-02 19:32:37.069973 | orchestrator | included: osism.services.traefik for testbed-manager
2025-06-02 19:32:37.070205 | orchestrator |
2025-06-02 19:32:37.070222 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-06-02 19:32:37.126388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-06-02 19:32:37.126444 | orchestrator |
2025-06-02 19:32:37.126456 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-06-02 19:32:38.202330 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-06-02 19:32:38.202429 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
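The version-pinning steps traced earlier in this output (`set-ceph-version.sh reef`, `set-openstack-version.sh 2024.2`) both follow the same grep-then-sed pattern against the manager `configuration.yml`. A minimal sketch of that pattern, reconstructed from the xtrace — the helper name `set_config_version` is hypothetical, and the real scripts under `/opt/configuration/scripts/` may differ:

```shell
# Hedged reconstruction of the grep/sed version-pinning pattern seen in the
# xtrace. set_config_version is a hypothetical generalization over
# ceph_version/openstack_version; GNU sed is assumed for "sed -i".
set_config_version() {
    local key="$1" version="$2" conf="$3"
    # Only rewrite the value when the key already exists at line start,
    # mirroring the "[[ -n $(grep ...) ]]" guard in the trace.
    if grep -q "^${key}:" "$conf"; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$conf"
    fi
}
```

Called as `set_config_version ceph_version reef /opt/configuration/environments/manager/configuration.yml`, this reproduces the effect visible in the trace: the existing `ceph_version:` line is rewritten in place, and files without the key are left untouched.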
2025-06-02 19:32:38.202441 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-06-02 19:32:38.202452 | orchestrator |
2025-06-02 19:32:38.202463 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-06-02 19:32:40.093863 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-06-02 19:32:40.093983 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-06-02 19:32:40.094002 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-06-02 19:32:40.094062 | orchestrator |
2025-06-02 19:32:40.094076 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-06-02 19:32:40.745330 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 19:32:40.745444 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:40.745460 | orchestrator |
2025-06-02 19:32:40.745473 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-06-02 19:32:41.410998 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 19:32:41.411102 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:41.411118 | orchestrator |
2025-06-02 19:32:41.411131 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-06-02 19:32:41.472506 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:32:41.472581 | orchestrator |
2025-06-02 19:32:41.472594 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-06-02 19:32:41.917494 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:41.917611 | orchestrator |
2025-06-02 19:32:41.917627 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-06-02 19:32:41.990741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-06-02 19:32:41.990846 | orchestrator |
2025-06-02 19:32:41.990863 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-06-02 19:32:43.102910 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:43.103014 | orchestrator |
2025-06-02 19:32:43.103030 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-06-02 19:32:43.883600 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:43.883750 | orchestrator |
2025-06-02 19:32:43.883767 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-06-02 19:32:55.240654 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:55.240842 | orchestrator |
2025-06-02 19:32:55.240861 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-06-02 19:32:55.297807 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:32:55.297902 | orchestrator |
2025-06-02 19:32:55.297919 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-06-02 19:32:55.297932 | orchestrator |
2025-06-02 19:32:55.297943 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 19:32:57.076478 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:57.076584 | orchestrator |
2025-06-02 19:32:57.076632 | orchestrator | TASK [Apply manager role] ******************************************************
2025-06-02 19:32:57.172989 | orchestrator | included: osism.services.manager for testbed-manager
2025-06-02 19:32:57.173084 | orchestrator |
2025-06-02 19:32:57.173097 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-06-02 19:32:57.242897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 19:32:57.242950 | orchestrator |
2025-06-02 19:32:57.242963 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-06-02 19:32:59.738637 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:59.738811 | orchestrator |
2025-06-02 19:32:59.738827 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-06-02 19:32:59.794778 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:59.794868 | orchestrator |
2025-06-02 19:32:59.794884 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-06-02 19:32:59.925396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-06-02 19:32:59.925520 | orchestrator |
2025-06-02 19:32:59.925535 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-06-02 19:33:02.773856 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-06-02 19:33:02.773992 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-06-02 19:33:02.774076 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-06-02 19:33:02.774091 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-06-02 19:33:02.774102 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-06-02 19:33:02.774114 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-06-02 19:33:02.774125 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-06-02 19:33:02.774169 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-06-02 19:33:02.774181 | orchestrator |
2025-06-02 19:33:02.774194 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-06-02 19:33:03.420512 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:03.420614 | orchestrator |
2025-06-02 19:33:03.420629 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-06-02 19:33:04.094978 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:04.095082 | orchestrator |
2025-06-02 19:33:04.095097 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-06-02 19:33:04.167250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-06-02 19:33:04.167341 | orchestrator |
2025-06-02 19:33:04.167355 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-06-02 19:33:05.424325 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-06-02 19:33:05.425085 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-06-02 19:33:05.425116 | orchestrator |
2025-06-02 19:33:05.425130 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-06-02 19:33:06.051407 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:06.051556 | orchestrator |
2025-06-02 19:33:06.051583 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-06-02 19:33:06.098416 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:33:06.098516 | orchestrator |
2025-06-02 19:33:06.098532 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-06-02 19:33:06.148081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-06-02 19:33:06.148167 | orchestrator |
2025-06-02 19:33:06.148181 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-06-02 19:33:07.387136 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 19:33:07.387325 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 19:33:07.387343 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:07.387356 | orchestrator |
2025-06-02 19:33:07.387369 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-06-02 19:33:07.977236 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:07.977338 | orchestrator |
2025-06-02 19:33:07.977353 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-06-02 19:33:08.033195 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:33:08.033292 | orchestrator |
2025-06-02 19:33:08.033307 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-06-02 19:33:08.104644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-06-02 19:33:08.104815 | orchestrator |
2025-06-02 19:33:08.104833 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-06-02 19:33:08.611158 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:08.611260 | orchestrator |
2025-06-02 19:33:08.611275 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-06-02 19:33:08.988895 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:08.989000 | orchestrator |
2025-06-02 19:33:08.989015 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-06-02 19:33:10.124786 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-06-02 19:33:10.124894 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-06-02 19:33:10.124909 | orchestrator |
2025-06-02 19:33:10.124922 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-06-02 19:33:10.701155 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:10.701270 | orchestrator |
2025-06-02 19:33:10.701288 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-06-02 19:33:11.082930 | orchestrator | ok: [testbed-manager]
2025-06-02 19:33:11.083025 | orchestrator |
2025-06-02 19:33:11.083036 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-06-02 19:33:11.420499 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:11.420602 | orchestrator |
2025-06-02 19:33:11.420619 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-06-02 19:33:11.468451 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:33:11.468543 | orchestrator |
2025-06-02 19:33:11.468558 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-06-02 19:33:11.530798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-06-02 19:33:11.530879 | orchestrator |
2025-06-02 19:33:11.530893 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-06-02 19:33:11.573332 | orchestrator | ok: [testbed-manager]
2025-06-02 19:33:11.573403 | orchestrator |
2025-06-02 19:33:11.573416 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-06-02 19:33:13.416646 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-06-02 19:33:13.416794 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-06-02 19:33:13.416810 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-06-02 19:33:13.416823 | orchestrator |
2025-06-02 19:33:13.416835 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-06-02 19:33:14.053739 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:14.053860 | orchestrator |
2025-06-02 19:33:14.053887 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-06-02 19:33:14.689928 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:14.690097 | orchestrator |
2025-06-02 19:33:14.690118 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-06-02 19:33:15.330287 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:15.330396 | orchestrator |
2025-06-02 19:33:15.330412 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-06-02 19:33:15.404259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-06-02 19:33:15.404362 | orchestrator |
2025-06-02 19:33:15.404376 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-06-02 19:33:15.443452 | orchestrator | ok: [testbed-manager]
2025-06-02 19:33:15.443525 | orchestrator |
2025-06-02 19:33:15.443539 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-06-02 19:33:16.056194 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-06-02 19:33:16.056306 | orchestrator |
2025-06-02 19:33:16.056322 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-06-02 19:33:16.139327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-06-02 19:33:16.139430 | orchestrator |
2025-06-02 19:33:16.139445 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-06-02 19:33:16.756139 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:16.756242 | orchestrator |
2025-06-02 19:33:16.756256 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-06-02 19:33:17.302166 | orchestrator | ok: [testbed-manager]
2025-06-02 19:33:17.302269 | orchestrator |
2025-06-02 19:33:17.302285 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-06-02 19:33:17.352334 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:33:17.352417 | orchestrator |
2025-06-02 19:33:17.352430 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-06-02 19:33:17.408283 | orchestrator | ok: [testbed-manager]
2025-06-02 19:33:17.408363 | orchestrator |
2025-06-02 19:33:17.408376 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-06-02 19:33:18.224376 | orchestrator | changed: [testbed-manager]
2025-06-02 19:33:18.224481 | orchestrator |
2025-06-02 19:33:18.224498 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-06-02 19:34:18.070885 | orchestrator | changed: [testbed-manager]
2025-06-02 19:34:18.071029 | orchestrator |
2025-06-02 19:34:18.071047 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-06-02 19:34:19.060227 | orchestrator | ok: [testbed-manager]
2025-06-02 19:34:19.060335 | orchestrator |
2025-06-02 19:34:19.060350 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-06-02 19:34:19.112912 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:34:19.112991 | orchestrator |
2025-06-02 19:34:19.113005 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-06-02 19:34:22.861301 | orchestrator | changed: [testbed-manager]
2025-06-02 19:34:22.861421 | orchestrator |
2025-06-02 19:34:22.861441 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-06-02 19:34:22.921193 | orchestrator | ok: [testbed-manager]
2025-06-02 19:34:22.921281 | orchestrator |
2025-06-02 19:34:22.921296 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-02 19:34:22.921308 | orchestrator |
2025-06-02 19:34:22.921320 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-06-02 19:34:22.967417 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:34:22.967506 | orchestrator |
2025-06-02 19:34:22.967522 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-06-02 19:35:23.023136 | orchestrator | Pausing for 60 seconds
2025-06-02 19:35:23.023248 | orchestrator | changed: [testbed-manager]
2025-06-02 19:35:23.023263 | orchestrator |
2025-06-02 19:35:23.023277 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-06-02 19:35:27.148481 | orchestrator | changed: [testbed-manager]
2025-06-02 19:35:27.148592 | orchestrator |
2025-06-02 19:35:27.148608 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-06-02 19:36:08.694093 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-06-02 19:36:08.694215 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
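The "Wait for an healthy manager service" handler above is an Ansible task with `retries`/`until` semantics: the probe ran, failed twice ("50 retries left", "49 retries left"), then succeeded. A minimal shell sketch of that retry-until-success pattern — `retry` and `probe` are illustrative names, not part of the OSISM tooling, and the delay between attempts is omitted:

```shell
# Hedged sketch of the retry-until-success pattern used by the handler
# above. Runs the given command until it succeeds or max_attempts is
# reached; any sleep between attempts is left out for brevity.
retry() {
    local max_attempts="$1"; shift
    local attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$max_attempts" ]; then
            echo "giving up after ${attempt} attempts" >&2
            return 1
        fi
        attempt=$((attempt + 1))
    done
}
```

For example, `retry 50 probe_manager_health` would mirror the handler's 50-retry budget, with `probe_manager_health` standing in for whatever health check the task actually performs.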
2025-06-02 19:36:08.694231 | orchestrator | changed: [testbed-manager]
2025-06-02 19:36:08.694244 | orchestrator |
2025-06-02 19:36:08.694256 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-06-02 19:36:17.298753 | orchestrator | changed: [testbed-manager]
2025-06-02 19:36:17.298876 | orchestrator |
2025-06-02 19:36:17.298893 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-06-02 19:36:17.387433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-06-02 19:36:17.387530 | orchestrator |
2025-06-02 19:36:17.387545 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-02 19:36:17.387557 | orchestrator |
2025-06-02 19:36:17.387568 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-06-02 19:36:17.439718 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:36:17.439786 | orchestrator |
2025-06-02 19:36:17.439799 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:36:17.439811 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-06-02 19:36:17.439823 | orchestrator |
2025-06-02 19:36:17.536384 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 19:36:17.536466 | orchestrator | + deactivate
2025-06-02 19:36:17.536480 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-02 19:36:17.536494 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 19:36:17.536505 | orchestrator | + export PATH
2025-06-02 19:36:17.536516 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-02 19:36:17.536527 | orchestrator | + '[' -n '' ']'
2025-06-02 19:36:17.536538 | orchestrator | + hash -r
2025-06-02 19:36:17.536549 | orchestrator | + '[' -n '' ']'
2025-06-02 19:36:17.536559 | orchestrator | + unset VIRTUAL_ENV
2025-06-02 19:36:17.536570 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-02 19:36:17.536601 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-02 19:36:17.536612 | orchestrator | + unset -f deactivate
2025-06-02 19:36:17.536624 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-06-02 19:36:17.546332 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-02 19:36:17.546374 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-02 19:36:17.546386 | orchestrator | + local max_attempts=60
2025-06-02 19:36:17.546397 | orchestrator | + local name=ceph-ansible
2025-06-02 19:36:17.546407 | orchestrator | + local attempt_num=1
2025-06-02 19:36:17.546925 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 19:36:17.585644 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 19:36:17.585721 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-02 19:36:17.585738 | orchestrator | + local max_attempts=60
2025-06-02 19:36:17.585750 | orchestrator | + local name=kolla-ansible
2025-06-02 19:36:17.585761 | orchestrator | + local attempt_num=1
2025-06-02 19:36:17.586492 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-02 19:36:17.623475 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 19:36:17.623516 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-02 19:36:17.623530 | orchestrator | + local max_attempts=60
2025-06-02 19:36:17.623541 | orchestrator | + local name=osism-ansible
2025-06-02 19:36:17.623552 | orchestrator | + local attempt_num=1
2025-06-02 19:36:17.624137 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-02 19:36:17.663403 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 19:36:17.663471 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-02 19:36:17.663483 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-02 19:36:18.383466 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-06-02 19:36:18.572613 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-06-02 19:36:18.572800 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-06-02 19:36:18.572821 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-06-02 19:36:18.572834 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-06-02 19:36:18.572847 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-06-02 19:36:18.572897 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-06-02 19:36:18.572910 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-06-02 19:36:18.572921 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy)
2025-06-02 19:36:18.572931 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-06-02 19:36:18.572942 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-06-02 19:36:18.572953 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-06-02 19:36:18.572964 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-06-02 19:36:18.572974 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-06-02 19:36:18.572985 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-06-02 19:36:18.572996 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-06-02 19:36:18.580190 | orchestrator | ++ semver latest 7.0.0
2025-06-02 19:36:18.630474 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-02 19:36:18.630558 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-02 19:36:18.630572 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-06-02 19:36:18.635913 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-06-02 19:36:20.401710 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:36:20.401827 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:36:20.401844 | orchestrator | Registering Redlock._release_script
2025-06-02 19:36:20.590304 | orchestrator | 2025-06-02 19:36:20 | INFO  | Task 6b108829-2a1f-4a9f-a821-2235eae41d2e (resolvconf) was prepared for execution.
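The `set -x` trace above shows a `wait_for_container_healthy` helper polling `docker inspect -f '{{.State.Health.Status}}'` for each manager container. A minimal sketch of such a helper follows; this is a hypothetical reconstruction from the trace, not the actual testbed script, and the retry interval and error message are assumptions (`DOCKER` is made overridable here purely so the probe can be stubbed):

```shell
# Hypothetical reconstruction of the health-check helper seen in the trace.
# The real script presumably calls /usr/bin/docker directly.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    local status
    # Poll the container's health status until Docker reports "healthy".
    while true; do
        status="$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")"
        if [ "$status" = healthy ]; then
            return 0
        fi
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name still $status after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed retry interval
    done
}
```

Called as in the trace, e.g. `wait_for_container_healthy 60 ceph-ansible`, it returns as soon as the container reports healthy.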
2025-06-02 19:36:20.590405 | orchestrator | 2025-06-02 19:36:20 | INFO  | It takes a moment until task 6b108829-2a1f-4a9f-a821-2235eae41d2e (resolvconf) has been started and output is visible here.
2025-06-02 19:36:24.479718 | orchestrator |
2025-06-02 19:36:24.480661 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-06-02 19:36:24.481370 | orchestrator |
2025-06-02 19:36:24.482902 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 19:36:24.483766 | orchestrator | Monday 02 June 2025 19:36:24 +0000 (0:00:00.146) 0:00:00.146 ***********
2025-06-02 19:36:28.016895 | orchestrator | ok: [testbed-manager]
2025-06-02 19:36:28.017243 | orchestrator |
2025-06-02 19:36:28.019007 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-02 19:36:28.020022 | orchestrator | Monday 02 June 2025 19:36:28 +0000 (0:00:03.539) 0:00:03.686 ***********
2025-06-02 19:36:28.071104 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:36:28.071787 | orchestrator |
2025-06-02 19:36:28.072771 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-02 19:36:28.073872 | orchestrator | Monday 02 June 2025 19:36:28 +0000 (0:00:00.054) 0:00:03.740 ***********
2025-06-02 19:36:28.151841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-06-02 19:36:28.152261 | orchestrator |
2025-06-02 19:36:28.152904 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-02 19:36:28.153502 | orchestrator | Monday 02 June 2025 19:36:28 +0000 (0:00:00.080) 0:00:03.821 ***********
2025-06-02 19:36:28.244147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 19:36:28.244441 | orchestrator |
2025-06-02 19:36:28.245460 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-02 19:36:28.246537 | orchestrator | Monday 02 June 2025 19:36:28 +0000 (0:00:00.089) 0:00:03.911 ***********
2025-06-02 19:36:29.293337 | orchestrator | ok: [testbed-manager]
2025-06-02 19:36:29.293930 | orchestrator |
2025-06-02 19:36:29.293956 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-02 19:36:29.294224 | orchestrator | Monday 02 June 2025 19:36:29 +0000 (0:00:01.050) 0:00:04.962 ***********
2025-06-02 19:36:29.359143 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:36:29.360978 | orchestrator |
2025-06-02 19:36:29.361018 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-02 19:36:29.361510 | orchestrator | Monday 02 June 2025 19:36:29 +0000 (0:00:00.066) 0:00:05.028 ***********
2025-06-02 19:36:29.850461 | orchestrator | ok: [testbed-manager]
2025-06-02 19:36:29.850793 | orchestrator |
2025-06-02 19:36:29.851172 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-02 19:36:29.851595 | orchestrator | Monday 02 June 2025 19:36:29 +0000 (0:00:00.490) 0:00:05.519 ***********
2025-06-02 19:36:29.927383 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:36:29.928023 | orchestrator |
2025-06-02 19:36:29.928626 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-02 19:36:29.929789 | orchestrator | Monday 02 June 2025 19:36:29 +0000 (0:00:00.077) 0:00:05.596 ***********
2025-06-02 19:36:30.472728 | orchestrator | changed: [testbed-manager]
2025-06-02 19:36:30.473746 | orchestrator |
2025-06-02 19:36:30.474366 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-02 19:36:30.475309 | orchestrator | Monday 02 June 2025 19:36:30 +0000 (0:00:00.545) 0:00:06.142 ***********
2025-06-02 19:36:31.477916 | orchestrator | changed: [testbed-manager]
2025-06-02 19:36:31.478458 | orchestrator |
2025-06-02 19:36:31.478839 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-02 19:36:31.479277 | orchestrator | Monday 02 June 2025 19:36:31 +0000 (0:00:01.003) 0:00:07.145 ***********
2025-06-02 19:36:32.392525 | orchestrator | ok: [testbed-manager]
2025-06-02 19:36:32.392626 | orchestrator |
2025-06-02 19:36:32.392641 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-02 19:36:32.392728 | orchestrator | Monday 02 June 2025 19:36:32 +0000 (0:00:00.914) 0:00:08.060 ***********
2025-06-02 19:36:32.479439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-06-02 19:36:32.479547 | orchestrator |
2025-06-02 19:36:32.479564 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-02 19:36:32.479577 | orchestrator | Monday 02 June 2025 19:36:32 +0000 (0:00:00.087) 0:00:08.147 ***********
2025-06-02 19:36:33.590900 | orchestrator | changed: [testbed-manager]
2025-06-02 19:36:33.591008 | orchestrator |
2025-06-02 19:36:33.591313 | orchestrator | 2025-06-02 19:36:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:36:33.591338 | orchestrator | 2025-06-02 19:36:33 | INFO  | Please wait and do not abort execution.
2025-06-02 19:36:33.591839 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:36:33.592436 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 19:36:33.592815 | orchestrator |
2025-06-02 19:36:33.593603 | orchestrator |
2025-06-02 19:36:33.594107 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:36:33.594316 | orchestrator | Monday 02 June 2025 19:36:33 +0000 (0:00:01.111) 0:00:09.259 ***********
2025-06-02 19:36:33.594866 | orchestrator | ===============================================================================
2025-06-02 19:36:33.595170 | orchestrator | Gathering Facts --------------------------------------------------------- 3.54s
2025-06-02 19:36:33.595870 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s
2025-06-02 19:36:33.596759 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s
2025-06-02 19:36:33.596779 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.00s
2025-06-02 19:36:33.597029 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.91s
2025-06-02 19:36:33.598100 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s
2025-06-02 19:36:33.598223 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-06-02 19:36:33.598320 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-06-02 19:36:33.598711 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-06-02 19:36:33.598940 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-06-02 19:36:33.599352 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-06-02 19:36:33.599903 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-06-02 19:36:33.600244 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2025-06-02 19:36:34.022523 | orchestrator | + osism apply sshconfig
2025-06-02 19:36:35.690662 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:36:35.690832 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:36:35.690848 | orchestrator | Registering Redlock._release_script
2025-06-02 19:36:35.746256 | orchestrator | 2025-06-02 19:36:35 | INFO  | Task 39424650-bc6c-4765-bcfc-3110cf4fc37b (sshconfig) was prepared for execution.
2025-06-02 19:36:35.746343 | orchestrator | 2025-06-02 19:36:35 | INFO  | It takes a moment until task 39424650-bc6c-4765-bcfc-3110cf4fc37b (sshconfig) has been started and output is visible here.
2025-06-02 19:36:39.576446 | orchestrator |
2025-06-02 19:36:39.577053 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-06-02 19:36:39.578342 | orchestrator |
2025-06-02 19:36:39.580042 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-06-02 19:36:39.580794 | orchestrator | Monday 02 June 2025 19:36:39 +0000 (0:00:00.160) 0:00:00.160 ***********
2025-06-02 19:36:40.097451 | orchestrator | ok: [testbed-manager]
2025-06-02 19:36:40.097615 | orchestrator |
2025-06-02 19:36:40.097763 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-06-02 19:36:40.098143 | orchestrator | Monday 02 June 2025 19:36:40 +0000 (0:00:00.524) 0:00:00.685 ***********
2025-06-02 19:36:40.577394 | orchestrator | changed: [testbed-manager]
2025-06-02 19:36:40.577623 | orchestrator |
2025-06-02 19:36:40.578782 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-06-02 19:36:40.579250 | orchestrator | Monday 02 June 2025 19:36:40 +0000 (0:00:00.479) 0:00:01.164 ***********
2025-06-02 19:36:46.235857 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-06-02 19:36:46.236598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-06-02 19:36:46.237564 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-06-02 19:36:46.238587 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-06-02 19:36:46.239332 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-06-02 19:36:46.240324 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-06-02 19:36:46.241043 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-06-02 19:36:46.241495 | orchestrator |
2025-06-02 19:36:46.242118 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-06-02 19:36:46.242748 | orchestrator | Monday 02 June 2025 19:36:46 +0000 (0:00:05.656) 0:00:06.820 ***********
2025-06-02 19:36:46.295098 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:36:46.295192 | orchestrator |
2025-06-02 19:36:46.295207 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-06-02 19:36:46.296238 | orchestrator | Monday 02 June 2025 19:36:46 +0000 (0:00:00.061) 0:00:06.881 ***********
2025-06-02 19:36:46.867204 | orchestrator | changed: [testbed-manager]
2025-06-02 19:36:46.868918 | orchestrator |
2025-06-02 19:36:46.870133 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:36:46.870571 | orchestrator | 2025-06-02 19:36:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:36:46.870594 | orchestrator | 2025-06-02 19:36:46 | INFO  | Please wait and do not abort execution.
2025-06-02 19:36:46.871550 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:36:46.872244 | orchestrator | 2025-06-02 19:36:46.873597 | orchestrator | 2025-06-02 19:36:46.874335 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:36:46.874844 | orchestrator | Monday 02 June 2025 19:36:46 +0000 (0:00:00.573) 0:00:07.455 *********** 2025-06-02 19:36:46.875368 | orchestrator | =============================================================================== 2025-06-02 19:36:46.876025 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.66s 2025-06-02 19:36:46.876573 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-06-02 19:36:46.877053 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.52s 2025-06-02 19:36:46.877538 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.48s 2025-06-02 19:36:46.877984 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-06-02 19:36:47.323869 | orchestrator | + osism apply known-hosts 2025-06-02 19:36:48.911630 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:36:48.911781 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:36:48.911797 | orchestrator | Registering Redlock._release_script 2025-06-02 19:36:48.969593 | orchestrator | 2025-06-02 19:36:48 | INFO  | Task be8803a5-009a-40fa-952c-40589a7706a4 (known-hosts) was prepared for execution. 2025-06-02 19:36:48.969723 | orchestrator | 2025-06-02 19:36:48 | INFO  | It takes a moment until task be8803a5-009a-40fa-952c-40589a7706a4 (known-hosts) has been started and output is visible here. 
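The sshconfig play above writes one fragment per host into the operator's `.ssh/config.d` directory ("Ensure config for each host exist") and then concatenates them into a single `.ssh/config` ("Assemble ssh config"). A minimal sketch of that fragment-and-assemble pattern follows; the Host-block contents are placeholders, not the role's actual template:

```shell
# Sketch of the per-host fragment + assemble pattern used by the role.
assemble_ssh_config() {
    local home="$1"; shift
    mkdir -p "$home/.ssh/config.d"          # "Ensure .ssh/config.d exist"
    local host
    for host in "$@"; do                    # "Ensure config for each host exist"
        printf 'Host %s\n    StrictHostKeyChecking ask\n' "$host" \
            > "$home/.ssh/config.d/$host"
    done
    # "Assemble ssh config": concatenate all fragments into one file.
    cat "$home/.ssh/config.d/"* > "$home/.ssh/config"
}
```

Invoked as, say, `assemble_ssh_config /home/dragon testbed-manager testbed-node-0 … testbed-node-5`, this yields one `Host` block per node, matching the seven loop items in the play.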
2025-06-02 19:36:52.837762 | orchestrator |
2025-06-02 19:36:52.838093 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-06-02 19:36:52.839781 | orchestrator |
2025-06-02 19:36:52.839822 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-06-02 19:36:52.840389 | orchestrator | Monday 02 June 2025 19:36:52 +0000 (0:00:00.161) 0:00:00.161 ***********
2025-06-02 19:36:58.765504 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-02 19:36:58.766958 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-02 19:36:58.766992 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-02 19:36:58.767573 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-02 19:36:58.768278 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-02 19:36:58.770081 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-02 19:36:58.770385 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-02 19:36:58.770870 | orchestrator |
2025-06-02 19:36:58.771396 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-06-02 19:36:58.771932 | orchestrator | Monday 02 June 2025 19:36:58 +0000 (0:00:05.928) 0:00:06.090 ***********
2025-06-02 19:36:58.931033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-02 19:36:58.931170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-02 19:36:58.932249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-02 19:36:58.932669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-02 19:36:58.933652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-02 19:36:58.934085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-02 19:36:58.934544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-02 19:36:58.934931 | orchestrator |
2025-06-02 19:36:58.935501 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:36:58.936106 | orchestrator | Monday 02 June 2025 19:36:58 +0000 (0:00:00.167) 0:00:06.257 ***********
2025-06-02 19:37:00.095441 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfI2oRZyoTXxJkalXSHXNJSPBRUyfp32ztXysPXuyFs)
2025-06-02 19:37:00.095673 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDC7zPdedr4Gtfv+5MB/IPNiGpfFuoXcrTeBwpSKZmu5Qy9xoiVfJ/xlfrGIsiDIE1qYl8NRHf49GuSSVKgdElo4KGNtqb0zR2v0XDQJWcxOFGvtmPAyO5qCzqpf5uftN9/LjDfFFW6IlVnt+qXV6cYa5Ito6VbmJz13guBE0TB+spwE1DFU9uPiIl2TmBq9XPlNKTZ0QEoB8jRRoEdZX0OEWz0RUZDp9+BnUC313cm9yD97ONwOUNmRmttgQVJKdRO8pHrF5CaYTYawAD7bMBEWGcKfJ05YGCqvaFKoRvU4DV0InfdjUC4e/moA6LtQ59cHO30JhjjCtkd9QCyZ7kSDqevr1fzmqEB5yK6opT6LYO6aeU/DgbsDtR33eoDRekTcyJWa/3Vj+dEDFUeRQ0yjjzXbLCjlLGPiect5FbzBiyBE66WreIvWjnS+Jk/v1iWUuILfaul3oxZ+hw+nWg3wKqfO72mcdsBgHS7eRKKjMrk24cWv5xyLUNRkJmVzi0=)
2025-06-02 19:37:00.096260 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMwWCflKm8U3pZj2qe72H4Cn1oTLpbTSRzEV+pZ2IxezI9VAJ1uVM5Yk8OBy2GnWTidmvd2oThbf35DGKbfl40k=)
2025-06-02 19:37:00.097186 | orchestrator |
2025-06-02 19:37:00.097787 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:37:00.098785 | orchestrator | Monday 02 June 2025 19:37:00 +0000 (0:00:01.163) 0:00:07.420 ***********
2025-06-02 19:37:01.174284 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMMWfJqYkXAyqyd9rkba6xkAPGAy1hBGcDC9KN5ABHer)
2025-06-02 19:37:01.175174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNv+la+q5WsHWXLVBSrrxxgFshGgkf90qCluW7N00NRKkNMTAHgfsLOUkTtpKKLiMNR7brrxpTXeh2jm/8jM0O/J056zIrYGcyfByVb6Jv7D6RmdWkzIg4Le4aCNx9PCnxwxPuPjWQOqts8/FeQjsuKy1zY2s2jpl2CUQWEQU8pBIOCrk+OnlMfQOgOIy4RYn9hPnfNCYog7YS4IA1EtL7m36ByRgns3prSlSk+9heAIHxSKJEUTh80ivqPRH7Vbihu4GRJ4bQ0ZbjZw4W57iZwtdG4QNRroGfPHd52sWSceLbIVcdvYzqiSkX7t7Y2meA0rGNnFjNdHSl4WF3ZZJfTWb/7PsxPrKKrknh5bAXwuC1bjQmV4Y32RStygJ2CoQ02AnWfAv5D6/oN2G/Ht37vwBaYPFkcypFtffVqx+nAaCZiIeqoKh+vw5IijFjLrE0QnRgInDC51JvUOIc4vojST3ETNVFSYRHVnIC3MmvKg0PHWpvoTPx26iEdNOqmeM=)
2025-06-02 19:37:01.176068 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrYBxr6cmxb+cgYqBa1nLtaX0pUZy6k3cX04EzdlDL5dYV/tkRUrOZwhJpiYHtFIkmaT8gWiYAfZ3DTAlnQRNg=)
2025-06-02 19:37:01.176878 | orchestrator |
2025-06-02 19:37:01.177636 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:37:01.178311 | orchestrator | Monday 02 June 2025 19:37:01 +0000 (0:00:01.079) 0:00:08.499 ***********
2025-06-02 19:37:02.199488 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO2oYeM0bZP34miPB8I3laHTY4mihbXEO0BEUANSpRpC5uUYcaNAeKiKFX9by3r2A76Kg1Sd9+n3JGGnqbyxSM4=)
2025-06-02 19:37:02.199591 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINvrmpuK5xFVbGGGkUOgZAVGXK2+qWfegpXPulW/zQOF)
2025-06-02 19:37:02.200134 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4lCGrd4LXNokfvPoIC7wogtoS7BPjHnfZYigMc6WzxlaBJtgC0W+K+3OHOH/xUfaf4yfXcsOlv0Bdu4zPV+0ezDgXvqgnaBqTnjS5o524Kob5LOwLrqp8fKo4OxL8mKWqm5d1sCdWG017mFsfpcvtmbCWHci4I1SqLxCWB9iqDtTfUXt/yNgAshdk3knOM8Wvh7z5hKGVJqdvVTq41xlJRjfxY8CecbaGMYMe4gLiKp8XwSapdR67XvW7L8lPw+W8YbCtXYXLNjwXR3rE0idemO1w4TE0M4zcKkt9cxHRQXGA/qb1g407fzfwaGVi+6nUuuCsUnue5Z00qF7JjUvpQ92sMISpk2guvwfLsMJIsfXLEjoI99Kl+v6R/V2cALCqNP8BaRaoMsHV6bTx6qCwUnwyfaU2sdR+XXrxHGJ17Z7Ria8mensYX7RdN3lf0FxIEbQh4fMVvqPMBL3+92INNWO4Cuz8royMCpnPgFPiXGa59PdCbWUYNJzQ/A+I4HU=)
2025-06-02 19:37:02.200161 | orchestrator |
2025-06-02 19:37:02.200176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:37:02.200245 | orchestrator | Monday 02 June 2025 19:37:02 +0000 (0:00:01.025) 0:00:09.524 ***********
2025-06-02 19:37:03.249444 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCm8Pnv2ywn30PShhIPLosTmdhkfmHE1elRNRfcUfhR1qs0zijYZlAE1KPvH9GGwEf32MQzFqpsShDc1J+6O6XvxWcV9cWex+bbNEsdBaYm7zcbR1VlOxmqiHWIKs9HtpuPGeLqzmitJj5JnLsDC9j2P4Y0yfFCTwy1rXrI8YPw5EsPGw05wPi3P1PMZUt+Fwuk0yPbnHO7CiCdgGlOSXtTxqfM5UcTSwrd3BWZZ0SlAs7/jXK5UDmCVVPcBdwhvzTuQKA02al4XgetabpGZN+g6o4Zi+sMDIkyAoOxnWvTAGE6oSb1qkstozlf10cR9xNm5pr2TDeyZvtHaYjFlZpJ5Xo1BbrClRG4Vr8u1UkBH9cmmxEP4RtVM+sYiz+hSpVqZ38vk7L1lEfb64ddStICNC7+htJ6sA4X5vHC2sFlxcXY+ZIcBNCnZe+tVUK4fecyEZUDPuLJHwv60inyyYm2d17wWgR2YUa4BRMcXXDOMdrMXn/LY3srJumXe2uV/1M=)
2025-06-02 19:37:03.250188 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJlPO1nevSP4iObTkhLK0cw5I2R4ahH8SkEXidrKtRTpqFr5x2C1zI+smbwHEC58uEB7IreB/ijoi4ncjZJr7v8=)
2025-06-02 19:37:03.250902 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO5kWhxoglmts3VBGHpAPytvl+8jIz7sHASVVib5OLGw)
2025-06-02 19:37:03.251670 | orchestrator |
2025-06-02 19:37:03.252175 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:37:03.252782 | orchestrator | Monday 02 June 2025 19:37:03 +0000 (0:00:01.048) 0:00:10.572 ***********
2025-06-02 19:37:04.254489 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDf0RvOCFAbe1Q5KX+qomzq3JA/ZwzORUMfSu/RmEuDO)
2025-06-02 19:37:04.255316 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQMDz494btaRwQGujexpfXBHx5294lvz9ca1M4Jjc93KcW7/ls2muQ4YMUeDAsBhHsIHgna+Ejjpd1ajannaON53zpV0OzMPZPvoPQNTyS9ZEwD/9mRduxzd4ytYU2DlyiEkBP4AuLS/y4sjVwHs/Qb/YaI9aswChVumJf8EWIlXZ7N89q7CDAYh2twUJo0cFqolLxbjaj4+PSg2viPcwabD24xYOoetVVAZNUMAZ2ULrVJIvskGIZ0VB2cXz6ZailYoHDtQlBpvC5VMhRIrjjqso5DZ71G1EspFtm5w/VMM/02CjDWERRlJAnEmWPwT0n6APDj7jNJdfuYte2Kbq3LfpcITSTJ2O6RcT7oPujN2Lc7GIzU6wXMdIi9dDinMj9U1+vDJNRoqm2zY/6G9mlQ3ZYwQiXQJYsPRGp1hrwniLAlMTZHC5e+c/bXGnoQDH+wYPg8801ofsi4yJPjxbfudG3CSe/2RgJO07OnpsBBJaoQ5JefGFMmDYcJc42SYE=)
2025-06-02 19:37:04.256876 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO5ZkXGVXgNPB5LZAYRXmtshyYpDa5O55kqUlJQKSeHAqXw/cIZ+Bj4OOi5a7JwCIUWGk6ngoZL2EGE37esyv4s=)
2025-06-02 19:37:04.257878 | orchestrator |
2025-06-02 19:37:04.258343 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:37:04.259241 | orchestrator | Monday 02 June 2025 19:37:04 +0000 (0:00:01.006) 0:00:11.579 ***********
2025-06-02 19:37:05.262815 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEslL0QKb3ilT0gfLNzkvgwhg7oa+yZE/PLTtyGdIleTciaLzhFpNWpAOA0u5WG/xYK364aJjUcmHFC+3eNvXv/we1bFDoDpTzrMW3m94tzyDiQdo09qlJN6KDrfqubl/eibRbyjz5268vyEUgspfI5L2eDWjFRMWAlDT/dxKebC3PsLZuRk5p8r+S9ZSacG7DBWPm6nGRwnUcM0fivPffGmhVU5o/KpX6SXuSHuQxMgwq1jgttGLmRYeM29flCpKZ56NWFHChK/H3aAf9/8YgN4nymMGEp5OqUQOGoNYgXaTgWppKtNisUrA8q2H30pq8he//kllVsPTPYK8tAgDC5FPYgYqJQBZqS8ry6x7/5JT/a10yKVn69pWjipKCVqRvy6QaaS/w5lpEkqezhAtYvzbJKi+883tLHalHiAhJsxJuRWgbw0mRnce8Fq69a3i37uBkuyHwQvGNWyiaIQ4P39rZ+kxwaf4krb08Ku/l4qSK7CBlJLmPF2AuFKvhL6M=)
2025-06-02 19:37:05.263023 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBaQD0PukEgHtWn5mUMfMMRJeY1/K6nJGfSfO/ny3cNfyLuPvVFn58k/BsazS7pmtjmuPQuGiq+ns55fy+OjffI=)
2025-06-02 19:37:05.263047 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILkOk+sp1xnblAMDp4Flv8EwrIat8pSLvYfSKffUP7/g)
2025-06-02 19:37:05.263585 | orchestrator |
2025-06-02 19:37:05.264098 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:37:05.264447 | orchestrator | Monday 02 June 2025 19:37:05 +0000 (0:00:01.007) 0:00:12.586 ***********
2025-06-02 19:37:06.326077 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDk//gacztyvCMu21etKTr3fLSho/7ORSSt57t/xCuk9GPK7l82AyLNGEywjf8L49MzhY3ZIvBAFmQtnT1EtuCqh6RTTZ/NvBMvCJxE5GUWh9UkOGRGLuwaaqkBNG/2CWaNvDjAuGrzuoBctf3nE28SkYb5Tsn9eAmJUiQGt3yL55zhn65Xv7COqYv9WFvCVu9fltYuX0q9R3CMIpIKG/yrL9YPCuYrAg1+PahiQoJuXmrJWQFGGOEDJgQkQUuG3lGWQxxb49j9YW/8BvxvHECTq+B0O2cPJnOfiq0WIzHpI/9MKrvDv4Jz/ZtxBK7hHOuxP3qlzF9laYeJ706iPpNu5XGnJJgz+GGw+q4K9iQdVtnT2kWYcGo/2K9hWU1l3f0uViX+z6Wvrd4lwnonRWTVDc3s5PtiJBPkX1qyeG8Tc5sWVvxd2NYiDrwUi4hvHOOWiLoRXRCZenFdOIYXIcJk5Ma0H0PQorNg7OP11hRGJ5Tg7kQWdDAACwsQ7g7gzlk=)
2025-06-02 19:37:06.327160 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJCAgjtNh5XC1HWvkaqOdl84UAIdd4xwqRMDUV0WwIcXAI6oobpiZYbUhLLWOR7aRUUoLsKU4Y1DA6MW1fZCxSo=)
2025-06-02 19:37:06.328051 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINePcJLJ63rvh5aIO1d7aOwnoyqE2WmKMl1/lxlubOZH)
2025-06-02 19:37:06.328803 | orchestrator |
2025-06-02 19:37:06.329499 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-06-02 19:37:06.330402 | orchestrator | Monday 02 June 2025 19:37:06 +0000 (0:00:05.159) 0:00:13.651 ***********
2025-06-02 19:37:11.485827 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-02 19:37:11.486196 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-02 19:37:11.487030 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-02 19:37:11.488799 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-02 19:37:11.489466 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-02 19:37:11.490124 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-02 19:37:11.490548 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-02 19:37:11.491141 | orchestrator |
2025-06-02 19:37:11.492019 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-06-02 19:37:11.492516 | orchestrator | Monday 02 June 2025 19:37:11 +0000 (0:00:05.159) 0:00:18.810 ***********
2025-06-02 19:37:11.645580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-02 19:37:11.645727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-02 19:37:11.646412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-02 19:37:11.647221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-02 19:37:11.647969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-02 19:37:11.648907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-02 19:37:11.649469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-02 19:37:11.650109 | orchestrator |
2025-06-02 19:37:11.650643 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:37:11.651036 | orchestrator | Monday 02 June 2025 19:37:11 +0000 (0:00:00.160) 0:00:18.971 ***********
2025-06-02 19:37:12.681816 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfI2oRZyoTXxJkalXSHXNJSPBRUyfp32ztXysPXuyFs)
2025-06-02 19:37:12.682311 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDC7zPdedr4Gtfv+5MB/IPNiGpfFuoXcrTeBwpSKZmu5Qy9xoiVfJ/xlfrGIsiDIE1qYl8NRHf49GuSSVKgdElo4KGNtqb0zR2v0XDQJWcxOFGvtmPAyO5qCzqpf5uftN9/LjDfFFW6IlVnt+qXV6cYa5Ito6VbmJz13guBE0TB+spwE1DFU9uPiIl2TmBq9XPlNKTZ0QEoB8jRRoEdZX0OEWz0RUZDp9+BnUC313cm9yD97ONwOUNmRmttgQVJKdRO8pHrF5CaYTYawAD7bMBEWGcKfJ05YGCqvaFKoRvU4DV0InfdjUC4e/moA6LtQ59cHO30JhjjCtkd9QCyZ7kSDqevr1fzmqEB5yK6opT6LYO6aeU/DgbsDtR33eoDRekTcyJWa/3Vj+dEDFUeRQ0yjjzXbLCjlLGPiect5FbzBiyBE66WreIvWjnS+Jk/v1iWUuILfaul3oxZ+hw+nWg3wKqfO72mcdsBgHS7eRKKjMrk24cWv5xyLUNRkJmVzi0=)
2025-06-02 19:37:12.683742 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMwWCflKm8U3pZj2qe72H4Cn1oTLpbTSRzEV+pZ2IxezI9VAJ1uVM5Yk8OBy2GnWTidmvd2oThbf35DGKbfl40k=)
2025-06-02 19:37:12.684338 | orchestrator |
2025-06-02 19:37:12.685419 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:37:12.685944 | orchestrator | Monday 02 June 2025
19:37:12 +0000 (0:00:01.034) 0:00:20.005 *********** 2025-06-02 19:37:13.774316 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrYBxr6cmxb+cgYqBa1nLtaX0pUZy6k3cX04EzdlDL5dYV/tkRUrOZwhJpiYHtFIkmaT8gWiYAfZ3DTAlnQRNg=) 2025-06-02 19:37:13.774428 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNv+la+q5WsHWXLVBSrrxxgFshGgkf90qCluW7N00NRKkNMTAHgfsLOUkTtpKKLiMNR7brrxpTXeh2jm/8jM0O/J056zIrYGcyfByVb6Jv7D6RmdWkzIg4Le4aCNx9PCnxwxPuPjWQOqts8/FeQjsuKy1zY2s2jpl2CUQWEQU8pBIOCrk+OnlMfQOgOIy4RYn9hPnfNCYog7YS4IA1EtL7m36ByRgns3prSlSk+9heAIHxSKJEUTh80ivqPRH7Vbihu4GRJ4bQ0ZbjZw4W57iZwtdG4QNRroGfPHd52sWSceLbIVcdvYzqiSkX7t7Y2meA0rGNnFjNdHSl4WF3ZZJfTWb/7PsxPrKKrknh5bAXwuC1bjQmV4Y32RStygJ2CoQ02AnWfAv5D6/oN2G/Ht37vwBaYPFkcypFtffVqx+nAaCZiIeqoKh+vw5IijFjLrE0QnRgInDC51JvUOIc4vojST3ETNVFSYRHVnIC3MmvKg0PHWpvoTPx26iEdNOqmeM=) 2025-06-02 19:37:13.774474 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMMWfJqYkXAyqyd9rkba6xkAPGAy1hBGcDC9KN5ABHer) 2025-06-02 19:37:13.775065 | orchestrator | 2025-06-02 19:37:13.776272 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:37:13.777634 | orchestrator | Monday 02 June 2025 19:37:13 +0000 (0:00:01.092) 0:00:21.098 *********** 2025-06-02 19:37:14.798832 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO2oYeM0bZP34miPB8I3laHTY4mihbXEO0BEUANSpRpC5uUYcaNAeKiKFX9by3r2A76Kg1Sd9+n3JGGnqbyxSM4=) 2025-06-02 19:37:14.799168 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC4lCGrd4LXNokfvPoIC7wogtoS7BPjHnfZYigMc6WzxlaBJtgC0W+K+3OHOH/xUfaf4yfXcsOlv0Bdu4zPV+0ezDgXvqgnaBqTnjS5o524Kob5LOwLrqp8fKo4OxL8mKWqm5d1sCdWG017mFsfpcvtmbCWHci4I1SqLxCWB9iqDtTfUXt/yNgAshdk3knOM8Wvh7z5hKGVJqdvVTq41xlJRjfxY8CecbaGMYMe4gLiKp8XwSapdR67XvW7L8lPw+W8YbCtXYXLNjwXR3rE0idemO1w4TE0M4zcKkt9cxHRQXGA/qb1g407fzfwaGVi+6nUuuCsUnue5Z00qF7JjUvpQ92sMISpk2guvwfLsMJIsfXLEjoI99Kl+v6R/V2cALCqNP8BaRaoMsHV6bTx6qCwUnwyfaU2sdR+XXrxHGJ17Z7Ria8mensYX7RdN3lf0FxIEbQh4fMVvqPMBL3+92INNWO4Cuz8royMCpnPgFPiXGa59PdCbWUYNJzQ/A+I4HU=) 2025-06-02 19:37:14.799983 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINvrmpuK5xFVbGGGkUOgZAVGXK2+qWfegpXPulW/zQOF) 2025-06-02 19:37:14.800375 | orchestrator | 2025-06-02 19:37:14.801271 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:37:14.801925 | orchestrator | Monday 02 June 2025 19:37:14 +0000 (0:00:01.025) 0:00:22.124 *********** 2025-06-02 19:37:15.840535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJlPO1nevSP4iObTkhLK0cw5I2R4ahH8SkEXidrKtRTpqFr5x2C1zI+smbwHEC58uEB7IreB/ijoi4ncjZJr7v8=) 2025-06-02 19:37:15.840911 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCm8Pnv2ywn30PShhIPLosTmdhkfmHE1elRNRfcUfhR1qs0zijYZlAE1KPvH9GGwEf32MQzFqpsShDc1J+6O6XvxWcV9cWex+bbNEsdBaYm7zcbR1VlOxmqiHWIKs9HtpuPGeLqzmitJj5JnLsDC9j2P4Y0yfFCTwy1rXrI8YPw5EsPGw05wPi3P1PMZUt+Fwuk0yPbnHO7CiCdgGlOSXtTxqfM5UcTSwrd3BWZZ0SlAs7/jXK5UDmCVVPcBdwhvzTuQKA02al4XgetabpGZN+g6o4Zi+sMDIkyAoOxnWvTAGE6oSb1qkstozlf10cR9xNm5pr2TDeyZvtHaYjFlZpJ5Xo1BbrClRG4Vr8u1UkBH9cmmxEP4RtVM+sYiz+hSpVqZ38vk7L1lEfb64ddStICNC7+htJ6sA4X5vHC2sFlxcXY+ZIcBNCnZe+tVUK4fecyEZUDPuLJHwv60inyyYm2d17wWgR2YUa4BRMcXXDOMdrMXn/LY3srJumXe2uV/1M=) 2025-06-02 19:37:15.842230 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO5kWhxoglmts3VBGHpAPytvl+8jIz7sHASVVib5OLGw) 2025-06-02 19:37:15.842977 | orchestrator | 2025-06-02 19:37:15.843880 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:37:15.844383 | orchestrator | Monday 02 June 2025 19:37:15 +0000 (0:00:01.041) 0:00:23.165 *********** 2025-06-02 19:37:16.869299 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO5ZkXGVXgNPB5LZAYRXmtshyYpDa5O55kqUlJQKSeHAqXw/cIZ+Bj4OOi5a7JwCIUWGk6ngoZL2EGE37esyv4s=) 2025-06-02 19:37:16.869418 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQMDz494btaRwQGujexpfXBHx5294lvz9ca1M4Jjc93KcW7/ls2muQ4YMUeDAsBhHsIHgna+Ejjpd1ajannaON53zpV0OzMPZPvoPQNTyS9ZEwD/9mRduxzd4ytYU2DlyiEkBP4AuLS/y4sjVwHs/Qb/YaI9aswChVumJf8EWIlXZ7N89q7CDAYh2twUJo0cFqolLxbjaj4+PSg2viPcwabD24xYOoetVVAZNUMAZ2ULrVJIvskGIZ0VB2cXz6ZailYoHDtQlBpvC5VMhRIrjjqso5DZ71G1EspFtm5w/VMM/02CjDWERRlJAnEmWPwT0n6APDj7jNJdfuYte2Kbq3LfpcITSTJ2O6RcT7oPujN2Lc7GIzU6wXMdIi9dDinMj9U1+vDJNRoqm2zY/6G9mlQ3ZYwQiXQJYsPRGp1hrwniLAlMTZHC5e+c/bXGnoQDH+wYPg8801ofsi4yJPjxbfudG3CSe/2RgJO07OnpsBBJaoQ5JefGFMmDYcJc42SYE=) 2025-06-02 19:37:16.869460 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDf0RvOCFAbe1Q5KX+qomzq3JA/ZwzORUMfSu/RmEuDO) 2025-06-02 19:37:16.869498 | orchestrator | 2025-06-02 19:37:16.869649 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:37:16.870363 | orchestrator | Monday 02 June 2025 19:37:16 +0000 (0:00:01.027) 0:00:24.193 *********** 2025-06-02 19:37:17.902273 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDEslL0QKb3ilT0gfLNzkvgwhg7oa+yZE/PLTtyGdIleTciaLzhFpNWpAOA0u5WG/xYK364aJjUcmHFC+3eNvXv/we1bFDoDpTzrMW3m94tzyDiQdo09qlJN6KDrfqubl/eibRbyjz5268vyEUgspfI5L2eDWjFRMWAlDT/dxKebC3PsLZuRk5p8r+S9ZSacG7DBWPm6nGRwnUcM0fivPffGmhVU5o/KpX6SXuSHuQxMgwq1jgttGLmRYeM29flCpKZ56NWFHChK/H3aAf9/8YgN4nymMGEp5OqUQOGoNYgXaTgWppKtNisUrA8q2H30pq8he//kllVsPTPYK8tAgDC5FPYgYqJQBZqS8ry6x7/5JT/a10yKVn69pWjipKCVqRvy6QaaS/w5lpEkqezhAtYvzbJKi+883tLHalHiAhJsxJuRWgbw0mRnce8Fq69a3i37uBkuyHwQvGNWyiaIQ4P39rZ+kxwaf4krb08Ku/l4qSK7CBlJLmPF2AuFKvhL6M=) 2025-06-02 19:37:17.903422 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBaQD0PukEgHtWn5mUMfMMRJeY1/K6nJGfSfO/ny3cNfyLuPvVFn58k/BsazS7pmtjmuPQuGiq+ns55fy+OjffI=) 2025-06-02 19:37:17.904258 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILkOk+sp1xnblAMDp4Flv8EwrIat8pSLvYfSKffUP7/g) 2025-06-02 19:37:17.905274 | orchestrator | 2025-06-02 19:37:17.906204 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:37:17.907196 | orchestrator | Monday 02 June 2025 19:37:17 +0000 (0:00:01.034) 0:00:25.227 *********** 2025-06-02 19:37:18.927486 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINePcJLJ63rvh5aIO1d7aOwnoyqE2WmKMl1/lxlubOZH) 2025-06-02 19:37:18.927771 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDk//gacztyvCMu21etKTr3fLSho/7ORSSt57t/xCuk9GPK7l82AyLNGEywjf8L49MzhY3ZIvBAFmQtnT1EtuCqh6RTTZ/NvBMvCJxE5GUWh9UkOGRGLuwaaqkBNG/2CWaNvDjAuGrzuoBctf3nE28SkYb5Tsn9eAmJUiQGt3yL55zhn65Xv7COqYv9WFvCVu9fltYuX0q9R3CMIpIKG/yrL9YPCuYrAg1+PahiQoJuXmrJWQFGGOEDJgQkQUuG3lGWQxxb49j9YW/8BvxvHECTq+B0O2cPJnOfiq0WIzHpI/9MKrvDv4Jz/ZtxBK7hHOuxP3qlzF9laYeJ706iPpNu5XGnJJgz+GGw+q4K9iQdVtnT2kWYcGo/2K9hWU1l3f0uViX+z6Wvrd4lwnonRWTVDc3s5PtiJBPkX1qyeG8Tc5sWVvxd2NYiDrwUi4hvHOOWiLoRXRCZenFdOIYXIcJk5Ma0H0PQorNg7OP11hRGJ5Tg7kQWdDAACwsQ7g7gzlk=) 2025-06-02 19:37:18.928998 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJCAgjtNh5XC1HWvkaqOdl84UAIdd4xwqRMDUV0WwIcXAI6oobpiZYbUhLLWOR7aRUUoLsKU4Y1DA6MW1fZCxSo=) 2025-06-02 19:37:18.929162 | orchestrator | 2025-06-02 19:37:18.929536 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-02 19:37:18.930003 | orchestrator | Monday 02 June 2025 19:37:18 +0000 (0:00:01.022) 0:00:26.250 *********** 2025-06-02 19:37:19.081043 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-02 19:37:19.082307 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 19:37:19.082948 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-02 19:37:19.084001 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-02 19:37:19.084889 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-02 19:37:19.085289 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-02 19:37:19.086297 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-02 19:37:19.086670 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:37:19.087848 | orchestrator | 2025-06-02 19:37:19.088350 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2025-06-02 19:37:19.088829 | orchestrator | Monday 02 June 2025 19:37:19 +0000 (0:00:00.156) 0:00:26.406 *********** 2025-06-02 19:37:19.146948 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:37:19.147950 | orchestrator | 2025-06-02 19:37:19.149082 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-02 19:37:19.150135 | orchestrator | Monday 02 June 2025 19:37:19 +0000 (0:00:00.065) 0:00:26.472 *********** 2025-06-02 19:37:19.196891 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:37:19.197322 | orchestrator | 2025-06-02 19:37:19.198327 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-02 19:37:19.198934 | orchestrator | Monday 02 June 2025 19:37:19 +0000 (0:00:00.051) 0:00:26.523 *********** 2025-06-02 19:37:19.670559 | orchestrator | changed: [testbed-manager] 2025-06-02 19:37:19.670853 | orchestrator | 2025-06-02 19:37:19.672700 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:37:19.672824 | orchestrator | 2025-06-02 19:37:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:37:19.673227 | orchestrator | 2025-06-02 19:37:19 | INFO  | Please wait and do not abort execution. 
2025-06-02 19:37:19.673725 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 19:37:19.675198 | orchestrator | 2025-06-02 19:37:19.675640 | orchestrator | 2025-06-02 19:37:19.676705 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:37:19.677060 | orchestrator | Monday 02 June 2025 19:37:19 +0000 (0:00:00.472) 0:00:26.996 *********** 2025-06-02 19:37:19.677661 | orchestrator | =============================================================================== 2025-06-02 19:37:19.678280 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.93s 2025-06-02 19:37:19.679005 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.16s 2025-06-02 19:37:19.679520 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-06-02 19:37:19.680159 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-02 19:37:19.680793 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-06-02 19:37:19.681424 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-02 19:37:19.682199 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-02 19:37:19.682645 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-02 19:37:19.684034 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-02 19:37:19.684417 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-02 19:37:19.685273 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-02 19:37:19.686148 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-02 19:37:19.686948 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-02 19:37:19.687516 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-06-02 19:37:19.688244 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-06-02 19:37:19.689038 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-06-02 19:37:19.689569 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.47s 2025-06-02 19:37:19.690249 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-06-02 19:37:19.690813 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-06-02 19:37:19.691378 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-06-02 19:37:20.106438 | orchestrator | + osism apply squid 2025-06-02 19:37:21.749668 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:37:21.749811 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:37:21.749826 | orchestrator | Registering Redlock._release_script 2025-06-02 19:37:21.814541 | orchestrator | 2025-06-02 19:37:21 | INFO  | Task d82c66e2-5ccc-4535-9a19-cd0443117730 (squid) was prepared for execution. 2025-06-02 19:37:21.814663 | orchestrator | 2025-06-02 19:37:21 | INFO  | It takes a moment until task d82c66e2-5ccc-4535-9a19-cd0443117730 (squid) has been started and output is visible here. 
2025-06-02 19:37:25.678575 | orchestrator | 2025-06-02 19:37:25.678738 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-02 19:37:25.678757 | orchestrator | 2025-06-02 19:37:25.679181 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-02 19:37:25.679594 | orchestrator | Monday 02 June 2025 19:37:25 +0000 (0:00:00.162) 0:00:00.162 *********** 2025-06-02 19:37:25.761636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 19:37:25.761866 | orchestrator | 2025-06-02 19:37:25.763011 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-02 19:37:25.764095 | orchestrator | Monday 02 June 2025 19:37:25 +0000 (0:00:00.085) 0:00:00.247 *********** 2025-06-02 19:37:27.102839 | orchestrator | ok: [testbed-manager] 2025-06-02 19:37:27.102954 | orchestrator | 2025-06-02 19:37:27.102970 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-02 19:37:27.102983 | orchestrator | Monday 02 June 2025 19:37:27 +0000 (0:00:01.339) 0:00:01.587 *********** 2025-06-02 19:37:28.228101 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-02 19:37:28.230371 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-02 19:37:28.230405 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-02 19:37:28.231249 | orchestrator | 2025-06-02 19:37:28.231875 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-02 19:37:28.232476 | orchestrator | Monday 02 June 2025 19:37:28 +0000 (0:00:01.125) 0:00:02.712 *********** 2025-06-02 19:37:29.277306 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-02 19:37:29.278382 | 
orchestrator | 2025-06-02 19:37:29.278851 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-02 19:37:29.279880 | orchestrator | Monday 02 June 2025 19:37:29 +0000 (0:00:01.049) 0:00:03.762 *********** 2025-06-02 19:37:29.617425 | orchestrator | ok: [testbed-manager] 2025-06-02 19:37:29.617540 | orchestrator | 2025-06-02 19:37:29.617557 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-02 19:37:29.618262 | orchestrator | Monday 02 June 2025 19:37:29 +0000 (0:00:00.338) 0:00:04.100 *********** 2025-06-02 19:37:30.508975 | orchestrator | changed: [testbed-manager] 2025-06-02 19:37:30.509525 | orchestrator | 2025-06-02 19:37:30.510424 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-02 19:37:30.511004 | orchestrator | Monday 02 June 2025 19:37:30 +0000 (0:00:00.892) 0:00:04.992 *********** 2025-06-02 19:38:01.620859 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-02 19:38:01.620981 | orchestrator | ok: [testbed-manager] 2025-06-02 19:38:01.621069 | orchestrator | 2025-06-02 19:38:01.624183 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-02 19:38:01.624610 | orchestrator | Monday 02 June 2025 19:38:01 +0000 (0:00:31.110) 0:00:36.103 *********** 2025-06-02 19:38:14.084026 | orchestrator | changed: [testbed-manager] 2025-06-02 19:38:14.084225 | orchestrator | 2025-06-02 19:38:14.084359 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-02 19:38:14.084489 | orchestrator | Monday 02 June 2025 19:38:14 +0000 (0:00:12.465) 0:00:48.568 *********** 2025-06-02 19:39:14.156558 | orchestrator | Pausing for 60 seconds 2025-06-02 19:39:14.156793 | orchestrator | changed: [testbed-manager] 2025-06-02 19:39:14.156816 | orchestrator | 2025-06-02 19:39:14.156917 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-02 19:39:14.157047 | orchestrator | Monday 02 June 2025 19:39:14 +0000 (0:01:00.070) 0:01:48.638 *********** 2025-06-02 19:39:14.217836 | orchestrator | ok: [testbed-manager] 2025-06-02 19:39:14.218090 | orchestrator | 2025-06-02 19:39:14.219389 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-02 19:39:14.220761 | orchestrator | Monday 02 June 2025 19:39:14 +0000 (0:00:00.064) 0:01:48.703 *********** 2025-06-02 19:39:14.864102 | orchestrator | changed: [testbed-manager] 2025-06-02 19:39:14.864484 | orchestrator | 2025-06-02 19:39:14.865411 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:39:14.866189 | orchestrator | 2025-06-02 19:39:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 19:39:14.866259 | orchestrator | 2025-06-02 19:39:14 | INFO  | Please wait and do not abort execution. 2025-06-02 19:39:14.867626 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:39:14.868306 | orchestrator | 2025-06-02 19:39:14.869330 | orchestrator | 2025-06-02 19:39:14.870099 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:39:14.871183 | orchestrator | Monday 02 June 2025 19:39:14 +0000 (0:00:00.645) 0:01:49.348 *********** 2025-06-02 19:39:14.871688 | orchestrator | =============================================================================== 2025-06-02 19:39:14.872836 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-06-02 19:39:14.873750 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.11s 2025-06-02 19:39:14.874777 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.47s 2025-06-02 19:39:14.875740 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.34s 2025-06-02 19:39:14.876329 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s 2025-06-02 19:39:14.877098 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s 2025-06-02 19:39:14.877859 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.89s 2025-06-02 19:39:14.878636 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2025-06-02 19:39:14.879347 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2025-06-02 19:39:14.880501 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-06-02 19:39:14.881479 | orchestrator | 
osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-06-02 19:39:15.370762 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 19:39:15.370989 | orchestrator | ++ semver latest 9.0.0 2025-06-02 19:39:15.431301 | orchestrator | + [[ -1 -lt 0 ]] 2025-06-02 19:39:15.431388 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 19:39:15.431403 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-02 19:39:17.065500 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:39:17.065601 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:39:17.065616 | orchestrator | Registering Redlock._release_script 2025-06-02 19:39:17.121714 | orchestrator | 2025-06-02 19:39:17 | INFO  | Task dee007d4-9067-4b56-a24f-0d3e58d941bd (operator) was prepared for execution. 2025-06-02 19:39:17.121828 | orchestrator | 2025-06-02 19:39:17 | INFO  | It takes a moment until task dee007d4-9067-4b56-a24f-0d3e58d941bd (operator) has been started and output is visible here. 
2025-06-02 19:39:20.967897 | orchestrator | 2025-06-02 19:39:20.968906 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-06-02 19:39:20.971398 | orchestrator | 2025-06-02 19:39:20.971807 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 19:39:20.971830 | orchestrator | Monday 02 June 2025 19:39:20 +0000 (0:00:00.146) 0:00:00.146 *********** 2025-06-02 19:39:24.314744 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:39:24.315493 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:39:24.317399 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:39:24.317704 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:39:24.318804 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:39:24.319327 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:39:24.320371 | orchestrator | 2025-06-02 19:39:24.321132 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-02 19:39:24.321827 | orchestrator | Monday 02 June 2025 19:39:24 +0000 (0:00:03.348) 0:00:03.494 *********** 2025-06-02 19:39:25.074218 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:39:25.074327 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:39:25.074343 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:39:25.076969 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:39:25.077692 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:39:25.078398 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:39:25.079019 | orchestrator | 2025-06-02 19:39:25.080036 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-02 19:39:25.080560 | orchestrator | 2025-06-02 19:39:25.081087 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-02 19:39:25.081533 | orchestrator | Monday 02 June 2025 19:39:25 +0000 (0:00:00.755) 0:00:04.250 *********** 2025-06-02 
19:39:25.146536 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:25.171106 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:25.195165 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:25.231988 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:25.232272 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:25.233876 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:25.237746 | orchestrator |
2025-06-02 19:39:25.237794 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-02 19:39:25.237808 | orchestrator | Monday 02 June 2025 19:39:25 +0000 (0:00:00.162) 0:00:04.412 ***********
2025-06-02 19:39:25.296889 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:25.319247 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:25.356329 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:25.417828 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:25.418370 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:25.418807 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:25.422203 | orchestrator |
2025-06-02 19:39:25.422675 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-02 19:39:25.422965 | orchestrator | Monday 02 June 2025 19:39:25 +0000 (0:00:00.184) 0:00:04.597 ***********
2025-06-02 19:39:25.993767 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:25.994728 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:25.996399 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:25.996902 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:25.997582 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:25.998061 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:26.001023 | orchestrator |
2025-06-02 19:39:26.001047 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-02 19:39:26.001055 | orchestrator | Monday 02 June 2025 19:39:25 +0000 (0:00:00.575) 0:00:05.173 ***********
2025-06-02 19:39:26.800095 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:26.800199 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:26.800478 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:26.800950 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:26.802145 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:26.802809 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:26.803998 | orchestrator |
2025-06-02 19:39:26.804515 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-02 19:39:26.804903 | orchestrator | Monday 02 June 2025 19:39:26 +0000 (0:00:00.805) 0:00:05.978 ***********
2025-06-02 19:39:27.946198 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-02 19:39:27.946355 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-02 19:39:27.947773 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-02 19:39:27.948971 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-02 19:39:27.949768 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-02 19:39:27.950825 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-02 19:39:27.951767 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-02 19:39:27.952863 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-02 19:39:27.952901 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-02 19:39:27.953969 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-02 19:39:27.954858 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-02 19:39:27.955591 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-02 19:39:27.956219 | orchestrator |
2025-06-02 19:39:27.957273 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-02 19:39:27.957686 | orchestrator | Monday 02 June 2025 19:39:27 +0000 (0:00:01.146) 0:00:07.125 ***********
2025-06-02 19:39:29.183131 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:29.183326 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:29.183774 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:29.184383 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:29.184771 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:29.185552 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:29.185836 | orchestrator |
2025-06-02 19:39:29.186490 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-02 19:39:29.186802 | orchestrator | Monday 02 June 2025 19:39:29 +0000 (0:00:01.236) 0:00:08.361 ***********
2025-06-02 19:39:30.379478 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-02 19:39:30.379801 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-02 19:39:30.380700 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-02 19:39:30.454435 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:39:30.454533 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:39:30.454547 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:39:30.455559 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:39:30.456340 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:39:30.457081 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:39:30.457796 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-02 19:39:30.458463 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-02 19:39:30.459106 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-02 19:39:30.460014 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-02 19:39:30.460275 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-02 19:39:30.461392 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-02 19:39:30.462274 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:39:30.462799 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:39:30.463502 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:39:30.463864 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:39:30.464269 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:39:30.465318 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:39:30.465605 | orchestrator |
2025-06-02 19:39:30.466239 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-02 19:39:30.466964 | orchestrator | Monday 02 June 2025 19:39:30 +0000 (0:00:01.271) 0:00:09.633 ***********
2025-06-02 19:39:31.011098 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:31.011237 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:31.011254 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:31.011265 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:31.011363 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:31.011595 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:31.011825 | orchestrator |
2025-06-02 19:39:31.012413 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-02 19:39:31.012571 | orchestrator | Monday 02 June 2025 19:39:31 +0000 (0:00:00.558) 0:00:10.191 ***********
2025-06-02 19:39:31.097993 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:39:31.128241 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:39:31.194971 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:39:31.195899 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:39:31.198687 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:39:31.198726 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:39:31.199571 | orchestrator |
2025-06-02 19:39:31.200390 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-02 19:39:31.201401 | orchestrator | Monday 02 June 2025 19:39:31 +0000 (0:00:00.182) 0:00:10.374 ***********
2025-06-02 19:39:31.890992 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 19:39:31.893772 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-02 19:39:31.893806 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 19:39:31.893818 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-02 19:39:31.893866 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:31.893880 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:31.893892 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:31.893903 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:31.893964 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 19:39:31.894189 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 19:39:31.894627 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:31.894896 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:31.895223 | orchestrator |
2025-06-02 19:39:31.895524 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-02 19:39:31.895843 | orchestrator | Monday 02 June 2025 19:39:31 +0000 (0:00:00.695) 0:00:11.070 ***********
2025-06-02 19:39:31.946840 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:39:31.973755 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:39:31.994322 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:39:32.055496 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:39:32.055980 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:39:32.057802 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:39:32.057920 | orchestrator |
2025-06-02 19:39:32.058658 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-02 19:39:32.059175 | orchestrator | Monday 02 June 2025 19:39:32 +0000 (0:00:00.132) 0:00:11.235 ***********
2025-06-02 19:39:32.102621 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:39:32.144965 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:39:32.161168 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:39:32.187024 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:39:32.187457 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:39:32.189096 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:39:32.189873 | orchestrator |
2025-06-02 19:39:32.190904 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-02 19:39:32.191128 | orchestrator | Monday 02 June 2025 19:39:32 +0000 (0:00:00.130) 0:00:11.368 ***********
2025-06-02 19:39:32.228615 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:39:32.246983 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:39:32.290070 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:39:32.319871 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:39:32.320599 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:39:32.321752 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:39:32.322494 | orchestrator |
2025-06-02 19:39:32.323759 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-02 19:39:32.323856 | orchestrator | Monday 02 June 2025 19:39:32 +0000 (0:00:00.130) 0:00:11.498 ***********
2025-06-02 19:39:32.982616 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:32.986012 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:32.986140 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:32.986153 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:32.986164 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:32.986174 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:32.986614 | orchestrator |
2025-06-02 19:39:32.987389 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-02 19:39:32.987923 | orchestrator | Monday 02 June 2025 19:39:32 +0000 (0:00:00.661) 0:00:12.160 ***********
2025-06-02 19:39:33.063340 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:39:33.097835 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:39:33.118752 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:39:33.236399 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:39:33.237512 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:39:33.238196 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:39:33.239266 | orchestrator |
2025-06-02 19:39:33.239959 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:39:33.240565 | orchestrator | 2025-06-02 19:39:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:39:33.240586 | orchestrator | 2025-06-02 19:39:33 | INFO  | Please wait and do not abort execution.
2025-06-02 19:39:33.241741 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:39:33.242754 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:39:33.243538 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:39:33.243955 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:39:33.245000 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:39:33.245371 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:39:33.246115 | orchestrator |
2025-06-02 19:39:33.246885 | orchestrator |
2025-06-02 19:39:33.247508 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:39:33.248213 | orchestrator | Monday 02 June 2025 19:39:33 +0000 (0:00:00.256) 0:00:12.416 ***********
2025-06-02 19:39:33.248946 | orchestrator | ===============================================================================
2025-06-02 19:39:33.249828 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s
2025-06-02 19:39:33.250321 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s
2025-06-02 19:39:33.250963 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.24s
2025-06-02 19:39:33.251579 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2025-06-02 19:39:33.252536 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2025-06-02 19:39:33.253115 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2025-06-02 19:39:33.253890 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2025-06-02 19:39:33.254577 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2025-06-02 19:39:33.255303 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.58s
2025-06-02 19:39:33.256139 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-06-02 19:39:33.256751 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s
2025-06-02 19:39:33.257374 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2025-06-02 19:39:33.257912 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2025-06-02 19:39:33.258419 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2025-06-02 19:39:33.259048 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-06-02 19:39:33.259513 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2025-06-02 19:39:33.260235 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
2025-06-02 19:39:33.782804 | orchestrator | + osism apply --environment custom facts
2025-06-02 19:39:35.399486 | orchestrator | 2025-06-02 19:39:35 | INFO  | Trying to run play facts in environment custom
2025-06-02 19:39:35.403191 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:39:35.403228 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:39:35.403235 | orchestrator | Registering Redlock._release_script
2025-06-02 19:39:35.478242 | orchestrator | 2025-06-02 19:39:35 | INFO  | Task 85fc1fc5-4b23-47a8-b08a-ea6852789689 (facts) was prepared for execution.
2025-06-02 19:39:35.478351 | orchestrator | 2025-06-02 19:39:35 | INFO  | It takes a moment until task 85fc1fc5-4b23-47a8-b08a-ea6852789689 (facts) has been started and output is visible here.
2025-06-02 19:39:39.543906 | orchestrator |
2025-06-02 19:39:39.544325 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-02 19:39:39.548037 | orchestrator |
2025-06-02 19:39:39.548953 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 19:39:39.550573 | orchestrator | Monday 02 June 2025 19:39:39 +0000 (0:00:00.091) 0:00:00.091 ***********
2025-06-02 19:39:41.082373 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:41.082923 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:41.084763 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:41.084832 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:41.085402 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:41.087887 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:41.087941 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:41.087954 | orchestrator |
2025-06-02 19:39:41.088984 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-02 19:39:41.089343 | orchestrator | Monday 02 June 2025 19:39:41 +0000 (0:00:01.538) 0:00:01.629 ***********
2025-06-02 19:39:42.317574 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:42.318202 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:42.319170 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:42.319235 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:42.319950 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:42.322581 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:42.322939 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:42.324102 | orchestrator |
2025-06-02 19:39:42.325219 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-02 19:39:42.326132 | orchestrator |
2025-06-02 19:39:42.326739 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 19:39:42.327655 | orchestrator | Monday 02 June 2025 19:39:42 +0000 (0:00:01.236) 0:00:02.865 ***********
2025-06-02 19:39:42.454418 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:42.454510 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:42.454560 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:42.456000 | orchestrator |
2025-06-02 19:39:42.456563 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 19:39:42.457735 | orchestrator | Monday 02 June 2025 19:39:42 +0000 (0:00:00.135) 0:00:03.001 ***********
2025-06-02 19:39:42.683771 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:42.685385 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:42.686808 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:42.687268 | orchestrator |
2025-06-02 19:39:42.688455 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 19:39:42.689848 | orchestrator | Monday 02 June 2025 19:39:42 +0000 (0:00:00.231) 0:00:03.232 ***********
2025-06-02 19:39:42.912040 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:42.912238 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:42.913247 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:42.913999 | orchestrator |
2025-06-02 19:39:42.914530 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 19:39:42.915082 | orchestrator | Monday 02 June 2025 19:39:42 +0000 (0:00:00.229) 0:00:03.462 ***********
2025-06-02 19:39:43.080585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:39:43.081016 | orchestrator |
2025-06-02 19:39:43.081677 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 19:39:43.082536 | orchestrator | Monday 02 June 2025 19:39:43 +0000 (0:00:00.167) 0:00:03.629 ***********
2025-06-02 19:39:43.533236 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:43.533408 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:43.534307 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:43.534934 | orchestrator |
2025-06-02 19:39:43.535753 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 19:39:43.536663 | orchestrator | Monday 02 June 2025 19:39:43 +0000 (0:00:00.452) 0:00:04.081 ***********
2025-06-02 19:39:43.642523 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:39:43.643659 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:39:43.644422 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:39:43.645084 | orchestrator |
2025-06-02 19:39:43.645768 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 19:39:43.646405 | orchestrator | Monday 02 June 2025 19:39:43 +0000 (0:00:00.110) 0:00:04.191 ***********
2025-06-02 19:39:44.697227 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:44.698747 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:44.700938 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:44.702232 | orchestrator |
2025-06-02 19:39:44.702904 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 19:39:44.704021 | orchestrator | Monday 02 June 2025 19:39:44 +0000 (0:00:01.053) 0:00:05.245 ***********
2025-06-02 19:39:45.173396 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:45.174711 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:45.175929 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:45.177571 | orchestrator |
2025-06-02 19:39:45.179382 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 19:39:45.179453 | orchestrator | Monday 02 June 2025 19:39:45 +0000 (0:00:00.477) 0:00:05.722 ***********
2025-06-02 19:39:46.221355 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:46.223558 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:46.224162 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:46.225017 | orchestrator |
2025-06-02 19:39:46.226251 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 19:39:46.227353 | orchestrator | Monday 02 June 2025 19:39:46 +0000 (0:00:01.044) 0:00:06.766 ***********
2025-06-02 19:39:59.542722 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:59.542810 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:59.542817 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:59.543752 | orchestrator |
2025-06-02 19:39:59.546863 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-02 19:39:59.547594 | orchestrator | Monday 02 June 2025 19:39:59 +0000 (0:00:00.101) 0:00:20.082 ***********
2025-06-02 19:39:59.588452 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:39:59.635142 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:39:59.639461 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:39:59.639539 | orchestrator |
2025-06-02 19:39:59.639602 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-02 19:39:59.641285 | orchestrator | Monday 02 June 2025 19:39:59 +0000 (0:00:00.101) 0:00:20.183 ***********
2025-06-02 19:40:06.419163 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:40:06.421018 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:40:06.421051 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:40:06.421381 | orchestrator |
2025-06-02 19:40:06.422371 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 19:40:06.423913 | orchestrator | Monday 02 June 2025 19:40:06 +0000 (0:00:06.784) 0:00:26.967 ***********
2025-06-02 19:40:06.917184 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:40:06.918918 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:40:06.918963 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:40:06.920073 | orchestrator |
2025-06-02 19:40:06.921148 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 19:40:06.922126 | orchestrator | Monday 02 June 2025 19:40:06 +0000 (0:00:00.497) 0:00:27.465 ***********
2025-06-02 19:40:10.364689 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-02 19:40:10.364802 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-02 19:40:10.365005 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-02 19:40:10.366321 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-02 19:40:10.367554 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-02 19:40:10.373186 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-02 19:40:10.373771 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-02 19:40:10.374892 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-02 19:40:10.375693 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-02 19:40:10.376883 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-02 19:40:10.377190 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-02 19:40:10.378280 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-02 19:40:10.378864 | orchestrator |
2025-06-02 19:40:10.379525 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 19:40:10.380166 | orchestrator | Monday 02 June 2025 19:40:10 +0000 (0:00:03.445) 0:00:30.911 ***********
2025-06-02 19:40:11.585285 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:40:11.587696 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:40:11.587743 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:40:11.587752 | orchestrator |
2025-06-02 19:40:11.588708 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 19:40:11.589250 | orchestrator |
2025-06-02 19:40:11.590093 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 19:40:11.590739 | orchestrator | Monday 02 June 2025 19:40:11 +0000 (0:00:01.222) 0:00:32.133 ***********
2025-06-02 19:40:15.365294 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:40:15.365430 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:40:15.365448 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:40:15.365530 | orchestrator | ok: [testbed-manager]
2025-06-02 19:40:15.366294 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:40:15.366337 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:40:15.366427 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:40:15.367923 | orchestrator |
2025-06-02 19:40:15.368694 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:40:15.368738 | orchestrator | 2025-06-02 19:40:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:40:15.368753 | orchestrator | 2025-06-02 19:40:15 | INFO  | Please wait and do not abort execution.
2025-06-02 19:40:15.369270 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:40:15.369364 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:40:15.369683 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:40:15.370133 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:40:15.371352 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:40:15.372373 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:40:15.373201 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:40:15.373942 | orchestrator |
2025-06-02 19:40:15.374479 | orchestrator |
2025-06-02 19:40:15.375283 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:40:15.375733 | orchestrator | Monday 02 June 2025 19:40:15 +0000 (0:00:03.782) 0:00:35.916 ***********
2025-06-02 19:40:15.376586 | orchestrator | ===============================================================================
2025-06-02 19:40:15.376875 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.32s
2025-06-02 19:40:15.377695 | orchestrator | Install required packages (Debian) -------------------------------------- 6.78s
2025-06-02 19:40:15.378313 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.78s
2025-06-02 19:40:15.378611 | orchestrator | Copy fact files --------------------------------------------------------- 3.45s
2025-06-02 19:40:15.378839 | orchestrator | Create custom facts directory ------------------------------------------- 1.54s
2025-06-02 19:40:15.379129 | orchestrator | Copy fact file ---------------------------------------------------------- 1.24s
2025-06-02 19:40:15.379399 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.22s
2025-06-02 19:40:15.379831 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2025-06-02 19:40:15.380009 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2025-06-02 19:40:15.380427 | orchestrator | Create custom facts directory ------------------------------------------- 0.50s
2025-06-02 19:40:15.381133 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-06-02 19:40:15.381535 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2025-06-02 19:40:15.382140 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2025-06-02 19:40:15.382590 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2025-06-02 19:40:15.383892 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2025-06-02 19:40:15.384916 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s
2025-06-02 19:40:15.385801 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-06-02 19:40:15.386441 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-06-02 19:40:15.835327 | orchestrator | + osism apply bootstrap
2025-06-02 19:40:17.474449 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:40:17.474559 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:40:17.474575 | orchestrator | Registering Redlock._release_script
2025-06-02 19:40:17.534297 | orchestrator | 2025-06-02 19:40:17 | INFO  | Task be1afa7d-8c59-45d5-ae9b-1225791949b5 (bootstrap) was prepared for execution.
2025-06-02 19:40:17.534402 | orchestrator | 2025-06-02 19:40:17 | INFO  | It takes a moment until task be1afa7d-8c59-45d5-ae9b-1225791949b5 (bootstrap) has been started and output is visible here.
2025-06-02 19:40:21.354261 | orchestrator |
2025-06-02 19:40:21.354425 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-02 19:40:21.354452 | orchestrator |
2025-06-02 19:40:21.355200 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-02 19:40:21.355920 | orchestrator | Monday 02 June 2025 19:40:21 +0000 (0:00:00.119) 0:00:00.119 ***********
2025-06-02 19:40:21.420505 | orchestrator | ok: [testbed-manager]
2025-06-02 19:40:21.438068 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:40:21.460451 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:40:21.533076 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:40:21.534683 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:40:21.534732 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:40:21.535342 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:40:21.536963 | orchestrator |
2025-06-02 19:40:21.537759 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 19:40:21.538550 | orchestrator |
2025-06-02 19:40:21.539085 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 19:40:21.539832 | orchestrator | Monday 02 June 2025 19:40:21 +0000 (0:00:00.184) 0:00:00.304 ***********
2025-06-02 19:40:25.093432 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:40:25.093717 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:40:25.093925 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:40:25.095522 | orchestrator | ok: [testbed-manager]
2025-06-02 19:40:25.095555 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:40:25.095566 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:40:25.095693 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:40:25.096088 | orchestrator |
2025-06-02 19:40:25.096503 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-02 19:40:25.096854 | orchestrator |
2025-06-02 19:40:25.097267 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 19:40:25.097706 | orchestrator | Monday 02 June 2025 19:40:25 +0000 (0:00:03.559) 0:00:03.863 ***********
2025-06-02 19:40:25.171496 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-02 19:40:25.205968 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-02 19:40:25.206142 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-06-02 19:40:25.206158 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-02 19:40:25.206235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 19:40:25.206249 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-02 19:40:25.250287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 19:40:25.250396 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-02 19:40:25.250493 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-06-02 19:40:25.250512 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 19:40:25.250690 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 19:40:25.294112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 19:40:25.296124 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-02 19:40:25.296281 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 19:40:25.296593 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 19:40:25.296780 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-06-02 19:40:25.509034 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-02 19:40:25.509317 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-06-02 19:40:25.509999 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:40:25.510589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 19:40:25.511187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-02 19:40:25.511762 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-06-02 19:40:25.512145 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 19:40:25.512607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 19:40:25.513105 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:40:25.513612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-06-02 19:40:25.514161 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:40:25.514614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 19:40:25.514904 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-02 19:40:25.515392 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 19:40:25.515854 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-06-02 19:40:25.516277 | orchestrator |
skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 19:40:25.516760 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-02 19:40:25.517126 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-02 19:40:25.517684 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-02 19:40:25.517936 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-02 19:40:25.518350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 19:40:25.518777 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-02 19:40:25.519076 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-02 19:40:25.519486 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-02 19:40:25.519862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 19:40:25.520181 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-02 19:40:25.520596 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-02 19:40:25.521496 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-02 19:40:25.522138 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:40:25.522798 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-02 19:40:25.523545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-02 19:40:25.524477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 19:40:25.524784 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-02 19:40:25.525324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-02 19:40:25.525908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 19:40:25.526367 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:40:25.527056 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-5)  2025-06-02 19:40:25.527662 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:40:25.527966 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-02 19:40:25.528508 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:40:25.528862 | orchestrator | 2025-06-02 19:40:25.529244 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-02 19:40:25.529564 | orchestrator | 2025-06-02 19:40:25.529951 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-02 19:40:25.530264 | orchestrator | Monday 02 June 2025 19:40:25 +0000 (0:00:00.414) 0:00:04.278 *********** 2025-06-02 19:40:26.715970 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:26.718213 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:26.718558 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:26.719142 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:26.719479 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:26.719921 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:26.720288 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:40:26.721750 | orchestrator | 2025-06-02 19:40:26.721954 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-02 19:40:26.722892 | orchestrator | Monday 02 June 2025 19:40:26 +0000 (0:00:01.207) 0:00:05.486 *********** 2025-06-02 19:40:27.998516 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:27.998848 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:27.999375 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:28.003398 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:28.003772 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:40:28.003799 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:28.003807 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:28.003819 | orchestrator | 2025-06-02 
19:40:28.003832 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-02 19:40:28.003845 | orchestrator | Monday 02 June 2025 19:40:27 +0000 (0:00:01.280) 0:00:06.766 *********** 2025-06-02 19:40:28.266324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:40:28.267574 | orchestrator | 2025-06-02 19:40:28.268299 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-02 19:40:28.269028 | orchestrator | Monday 02 June 2025 19:40:28 +0000 (0:00:00.269) 0:00:07.036 *********** 2025-06-02 19:40:30.410295 | orchestrator | changed: [testbed-manager] 2025-06-02 19:40:30.411457 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:40:30.412301 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:40:30.414442 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:40:30.415333 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:40:30.416278 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:40:30.417333 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:40:30.418303 | orchestrator | 2025-06-02 19:40:30.419402 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-02 19:40:30.419444 | orchestrator | Monday 02 June 2025 19:40:30 +0000 (0:00:02.140) 0:00:09.176 *********** 2025-06-02 19:40:30.483921 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:40:30.673557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:40:30.673702 | orchestrator | 2025-06-02 19:40:30.674329 | orchestrator | TASK 
[osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-02 19:40:30.675101 | orchestrator | Monday 02 June 2025 19:40:30 +0000 (0:00:00.266) 0:00:09.442 *********** 2025-06-02 19:40:31.638439 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:40:31.638761 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:40:31.639155 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:40:31.640035 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:40:31.640915 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:40:31.641163 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:40:31.641769 | orchestrator | 2025-06-02 19:40:31.642502 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-02 19:40:31.643728 | orchestrator | Monday 02 June 2025 19:40:31 +0000 (0:00:00.964) 0:00:10.407 *********** 2025-06-02 19:40:31.706231 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:40:32.316548 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:40:32.316826 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:40:32.318581 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:40:32.319694 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:40:32.320476 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:40:32.320954 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:40:32.321909 | orchestrator | 2025-06-02 19:40:32.322461 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-02 19:40:32.323206 | orchestrator | Monday 02 June 2025 19:40:32 +0000 (0:00:00.678) 0:00:11.085 *********** 2025-06-02 19:40:32.404930 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:40:32.430422 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:40:32.455987 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:40:32.721278 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:40:32.721846 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 19:40:32.722908 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:40:32.723575 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:32.724165 | orchestrator | 2025-06-02 19:40:32.724899 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-02 19:40:32.725289 | orchestrator | Monday 02 June 2025 19:40:32 +0000 (0:00:00.405) 0:00:11.491 *********** 2025-06-02 19:40:32.795790 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:40:32.824101 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:40:32.856149 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:40:32.874139 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:40:32.939268 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:40:32.940150 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:40:32.941319 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:40:32.942378 | orchestrator | 2025-06-02 19:40:32.943929 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-02 19:40:32.944568 | orchestrator | Monday 02 June 2025 19:40:32 +0000 (0:00:00.218) 0:00:11.709 *********** 2025-06-02 19:40:33.231133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:40:33.232209 | orchestrator | 2025-06-02 19:40:33.233087 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-02 19:40:33.234134 | orchestrator | Monday 02 June 2025 19:40:33 +0000 (0:00:00.290) 0:00:12.000 *********** 2025-06-02 19:40:33.517674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:40:33.518195 | orchestrator | 2025-06-02 19:40:33.518949 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-02 19:40:33.519737 | orchestrator | Monday 02 June 2025 19:40:33 +0000 (0:00:00.285) 0:00:12.285 *********** 2025-06-02 19:40:34.951214 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:34.951945 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:34.953080 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:34.954506 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:40:34.955533 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:34.956214 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:34.956973 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:34.957748 | orchestrator | 2025-06-02 19:40:34.958743 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-02 19:40:34.959021 | orchestrator | Monday 02 June 2025 19:40:34 +0000 (0:00:01.433) 0:00:13.719 *********** 2025-06-02 19:40:35.021367 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:40:35.043951 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:40:35.070817 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:40:35.093849 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:40:35.152035 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:40:35.152173 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:40:35.152441 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:40:35.152863 | orchestrator | 2025-06-02 19:40:35.153216 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-02 19:40:35.153516 | orchestrator | Monday 02 June 2025 19:40:35 +0000 (0:00:00.202) 0:00:13.922 *********** 2025-06-02 19:40:35.694551 | orchestrator | ok: [testbed-manager] 
2025-06-02 19:40:35.695291 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:40:35.696369 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:35.697734 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:35.698533 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:35.699278 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:35.700026 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:35.700783 | orchestrator | 2025-06-02 19:40:35.701788 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-02 19:40:35.702250 | orchestrator | Monday 02 June 2025 19:40:35 +0000 (0:00:00.540) 0:00:14.462 *********** 2025-06-02 19:40:35.793939 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:40:35.822469 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:40:35.842696 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:40:35.920976 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:40:35.921757 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:40:35.922131 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:40:35.922935 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:40:35.923580 | orchestrator | 2025-06-02 19:40:35.924129 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-02 19:40:35.924791 | orchestrator | Monday 02 June 2025 19:40:35 +0000 (0:00:00.227) 0:00:14.690 *********** 2025-06-02 19:40:36.463553 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:36.464794 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:40:36.465947 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:40:36.467230 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:40:36.468403 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:40:36.469468 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:40:36.470334 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:40:36.470854 | 
orchestrator | 2025-06-02 19:40:36.471880 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-02 19:40:36.472851 | orchestrator | Monday 02 June 2025 19:40:36 +0000 (0:00:00.542) 0:00:15.232 *********** 2025-06-02 19:40:37.545429 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:37.547841 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:40:37.549578 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:40:37.550994 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:40:37.552211 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:40:37.552817 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:40:37.554341 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:40:37.555415 | orchestrator | 2025-06-02 19:40:37.558160 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-02 19:40:37.559268 | orchestrator | Monday 02 June 2025 19:40:37 +0000 (0:00:01.079) 0:00:16.311 *********** 2025-06-02 19:40:38.681304 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:38.682166 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:38.682837 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:38.683291 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:38.683779 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:38.684088 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:38.684748 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:40:38.685211 | orchestrator | 2025-06-02 19:40:38.685606 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-02 19:40:38.686152 | orchestrator | Monday 02 June 2025 19:40:38 +0000 (0:00:01.137) 0:00:17.449 *********** 2025-06-02 19:40:39.041444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:40:39.041849 | orchestrator | 2025-06-02 19:40:39.042717 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-02 19:40:39.046486 | orchestrator | Monday 02 June 2025 19:40:39 +0000 (0:00:00.361) 0:00:17.811 *********** 2025-06-02 19:40:39.115892 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:40:40.306200 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:40:40.306347 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:40:40.306365 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:40:40.306377 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:40:40.306388 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:40:40.306469 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:40:40.308165 | orchestrator | 2025-06-02 19:40:40.308609 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-02 19:40:40.309178 | orchestrator | Monday 02 June 2025 19:40:40 +0000 (0:00:01.258) 0:00:19.069 *********** 2025-06-02 19:40:40.390290 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:40.419073 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:40:40.444270 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:40.473148 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:40.526544 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:40.526750 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:40.527185 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:40.527724 | orchestrator | 2025-06-02 19:40:40.530084 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-02 19:40:40.530245 | orchestrator | Monday 02 June 2025 19:40:40 +0000 (0:00:00.226) 0:00:19.296 *********** 2025-06-02 19:40:40.603470 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:40.622734 | orchestrator | 
ok: [testbed-node-0] 2025-06-02 19:40:40.646954 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:40.669853 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:40.737093 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:40.737719 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:40.738388 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:40.739230 | orchestrator | 2025-06-02 19:40:40.740477 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-02 19:40:40.741240 | orchestrator | Monday 02 June 2025 19:40:40 +0000 (0:00:00.210) 0:00:19.507 *********** 2025-06-02 19:40:40.805140 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:40.839569 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:40:40.862659 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:40.887509 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:40.951969 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:40.952045 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:40.953370 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:40.953381 | orchestrator | 2025-06-02 19:40:40.954546 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-02 19:40:40.955203 | orchestrator | Monday 02 June 2025 19:40:40 +0000 (0:00:00.212) 0:00:19.719 *********** 2025-06-02 19:40:41.239237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:40:41.239762 | orchestrator | 2025-06-02 19:40:41.240528 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-02 19:40:41.241040 | orchestrator | Monday 02 June 2025 19:40:41 +0000 (0:00:00.289) 0:00:20.009 *********** 2025-06-02 19:40:41.750987 | orchestrator | ok: [testbed-manager] 
2025-06-02 19:40:41.751914 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:41.752181 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:41.753355 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:40:41.753918 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:41.755472 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:41.755912 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:41.756504 | orchestrator | 2025-06-02 19:40:41.757278 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-02 19:40:41.758095 | orchestrator | Monday 02 June 2025 19:40:41 +0000 (0:00:00.509) 0:00:20.518 *********** 2025-06-02 19:40:41.822279 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:40:41.848925 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:40:41.873403 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:40:41.904854 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:40:41.974078 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:40:41.974227 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:40:41.974759 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:40:41.977109 | orchestrator | 2025-06-02 19:40:41.977804 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-02 19:40:41.978349 | orchestrator | Monday 02 June 2025 19:40:41 +0000 (0:00:00.225) 0:00:20.743 *********** 2025-06-02 19:40:43.035812 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:43.035937 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:40:43.037448 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:40:43.038466 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:43.039210 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:43.040255 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:40:43.041330 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:43.041712 | orchestrator | 2025-06-02 
19:40:43.042838 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 19:40:43.043581 | orchestrator | Monday 02 June 2025 19:40:43 +0000 (0:00:01.057) 0:00:21.801 *********** 2025-06-02 19:40:43.576600 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:43.577177 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:40:43.577971 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:40:43.579122 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:40:43.579923 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:43.580783 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:43.581544 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:43.582280 | orchestrator | 2025-06-02 19:40:43.582824 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 19:40:43.583748 | orchestrator | Monday 02 June 2025 19:40:43 +0000 (0:00:00.543) 0:00:22.344 *********** 2025-06-02 19:40:44.617422 | orchestrator | ok: [testbed-manager] 2025-06-02 19:40:44.619753 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:44.620532 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:40:44.620866 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:44.621596 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:40:44.621842 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:40:44.622500 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:44.622833 | orchestrator | 2025-06-02 19:40:44.623454 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 19:40:44.623932 | orchestrator | Monday 02 June 2025 19:40:44 +0000 (0:00:01.040) 0:00:23.385 *********** 2025-06-02 19:40:59.895473 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:40:59.896266 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:40:59.898084 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:40:59.898823 | orchestrator | changed: [testbed-manager] 
2025-06-02 19:40:59.899407 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:40:59.900313 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:40:59.901993 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:40:59.902495 | orchestrator | 2025-06-02 19:40:59.903913 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-02 19:40:59.904320 | orchestrator | Monday 02 June 2025 19:40:59 +0000 (0:00:15.274) 0:00:38.660 *********** 2025-06-02 19:40:59.993324 | orchestrator | ok: [testbed-manager] 2025-06-02 19:41:00.030089 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:41:00.054506 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:41:00.083300 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:41:00.138747 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:41:00.140238 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:41:00.141579 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:41:00.142908 | orchestrator | 2025-06-02 19:41:00.144040 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-02 19:41:00.145200 | orchestrator | Monday 02 June 2025 19:41:00 +0000 (0:00:00.247) 0:00:38.908 *********** 2025-06-02 19:41:00.212293 | orchestrator | ok: [testbed-manager] 2025-06-02 19:41:00.237924 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:41:00.260884 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:41:00.286117 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:41:00.337113 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:41:00.338909 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:41:00.339410 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:41:00.340829 | orchestrator | 2025-06-02 19:41:00.343154 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-06-02 19:41:00.344286 | orchestrator | Monday 02 June 2025 19:41:00 +0000 (0:00:00.198) 0:00:39.106 *********** 2025-06-02 
19:41:00.416840 | orchestrator | ok: [testbed-manager] 2025-06-02 19:41:00.438862 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:41:00.464993 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:41:00.488277 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:41:00.555221 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:41:00.556045 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:41:00.559848 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:41:00.559886 | orchestrator | 2025-06-02 19:41:00.559898 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-02 19:41:00.559911 | orchestrator | Monday 02 June 2025 19:41:00 +0000 (0:00:00.218) 0:00:39.324 *********** 2025-06-02 19:41:00.829591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:41:00.830258 | orchestrator | 2025-06-02 19:41:00.836517 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-02 19:41:00.836581 | orchestrator | Monday 02 June 2025 19:41:00 +0000 (0:00:00.274) 0:00:39.598 *********** 2025-06-02 19:41:02.422520 | orchestrator | ok: [testbed-manager] 2025-06-02 19:41:02.423944 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:41:02.425025 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:41:02.428470 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:41:02.429501 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:41:02.430822 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:41:02.431710 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:41:02.432599 | orchestrator | 2025-06-02 19:41:02.432920 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-02 19:41:02.433893 | orchestrator | Monday 02 June 2025 19:41:02 
+0000 (0:00:01.591) 0:00:41.190 ***********
2025-06-02 19:41:03.476221 | orchestrator | changed: [testbed-manager]
2025-06-02 19:41:03.477603 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:41:03.477860 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:41:03.479037 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:41:03.480295 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:41:03.481388 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:41:03.482153 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:41:03.482988 | orchestrator |
2025-06-02 19:41:03.484182 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-06-02 19:41:03.484945 | orchestrator | Monday 02 June 2025 19:41:03 +0000 (0:00:01.054) 0:00:42.244 ***********
2025-06-02 19:41:04.307720 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:04.308776 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:04.311712 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:04.312592 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:04.313374 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:04.314115 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:04.314998 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:04.316023 | orchestrator |
2025-06-02 19:41:04.316839 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-06-02 19:41:04.317335 | orchestrator | Monday 02 June 2025 19:41:04 +0000 (0:00:00.832) 0:00:43.077 ***********
2025-06-02 19:41:04.592838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:41:04.593008 | orchestrator |
2025-06-02 19:41:04.593495 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-06-02 19:41:04.594125 | orchestrator | Monday 02 June 2025 19:41:04 +0000 (0:00:00.284) 0:00:43.361 ***********
2025-06-02 19:41:05.628147 | orchestrator | changed: [testbed-manager]
2025-06-02 19:41:05.628581 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:41:05.629734 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:41:05.630904 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:41:05.631831 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:41:05.632378 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:41:05.633318 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:41:05.633766 | orchestrator |
2025-06-02 19:41:05.634670 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-06-02 19:41:05.635319 | orchestrator | Monday 02 June 2025 19:41:05 +0000 (0:00:01.034) 0:00:44.395 ***********
2025-06-02 19:41:05.703034 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:41:05.727043 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:41:05.752917 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:41:05.777126 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:41:05.920730 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:41:05.922167 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:41:05.923087 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:41:05.924638 | orchestrator |
2025-06-02 19:41:05.925650 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-06-02 19:41:05.926399 | orchestrator | Monday 02 June 2025 19:41:05 +0000 (0:00:00.292) 0:00:44.688 ***********
2025-06-02 19:41:17.002144 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:41:17.002270 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:41:17.002287 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:41:17.002299 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:41:17.002508 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:41:17.003475 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:41:17.003994 | orchestrator | changed: [testbed-manager]
2025-06-02 19:41:17.004649 | orchestrator |
2025-06-02 19:41:17.005061 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-06-02 19:41:17.005588 | orchestrator | Monday 02 June 2025 19:41:16 +0000 (0:00:11.076) 0:00:55.765 ***********
2025-06-02 19:41:18.651128 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:18.651536 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:18.652729 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:18.653861 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:18.654745 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:18.656778 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:18.656810 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:18.657291 | orchestrator |
2025-06-02 19:41:18.658400 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-06-02 19:41:18.659260 | orchestrator | Monday 02 June 2025 19:41:18 +0000 (0:00:01.652) 0:00:57.418 ***********
2025-06-02 19:41:19.510442 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:19.512689 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:19.514883 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:19.515843 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:19.517012 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:19.518716 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:19.519509 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:19.521148 | orchestrator |
2025-06-02 19:41:19.521784 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-06-02 19:41:19.523785 | orchestrator | Monday 02 June 2025 19:41:19 +0000 (0:00:00.858) 0:00:58.276 ***********
2025-06-02 19:41:19.579795 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:19.602386 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:19.626820 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:19.653138 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:19.705445 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:19.705960 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:19.706448 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:19.706962 | orchestrator |
2025-06-02 19:41:19.707884 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-06-02 19:41:19.708090 | orchestrator | Monday 02 June 2025 19:41:19 +0000 (0:00:00.198) 0:00:58.475 ***********
2025-06-02 19:41:19.792352 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:19.817090 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:19.850348 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:19.872361 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:19.936477 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:19.937074 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:19.938280 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:19.939106 | orchestrator |
2025-06-02 19:41:19.939963 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-06-02 19:41:19.940711 | orchestrator | Monday 02 June 2025 19:41:19 +0000 (0:00:00.229) 0:00:58.705 ***********
2025-06-02 19:41:20.226305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:41:20.226483 | orchestrator |
2025-06-02 19:41:20.228136 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-06-02 19:41:20.230097 | orchestrator | Monday 02 June 2025 19:41:20 +0000 (0:00:00.288) 0:00:58.993 ***********
2025-06-02 19:41:21.760289 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:21.760422 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:21.761079 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:21.761937 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:21.762703 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:21.763365 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:21.764024 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:21.764759 | orchestrator |
2025-06-02 19:41:21.765208 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-06-02 19:41:21.765774 | orchestrator | Monday 02 June 2025 19:41:21 +0000 (0:00:01.533) 0:01:00.527 ***********
2025-06-02 19:41:22.278777 | orchestrator | changed: [testbed-manager]
2025-06-02 19:41:22.278889 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:41:22.279849 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:41:22.281357 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:41:22.281838 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:41:22.282799 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:41:22.283958 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:41:22.284928 | orchestrator |
2025-06-02 19:41:22.285653 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-06-02 19:41:22.287692 | orchestrator | Monday 02 June 2025 19:41:22 +0000 (0:00:00.519) 0:01:01.046 ***********
2025-06-02 19:41:22.362258 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:22.393063 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:22.420550 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:22.448651 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:22.522512 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:22.522706 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:22.523273 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:22.523634 | orchestrator |
2025-06-02 19:41:22.524491 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-06-02 19:41:22.524799 | orchestrator | Monday 02 June 2025 19:41:22 +0000 (0:00:00.246) 0:01:01.292 ***********
2025-06-02 19:41:23.610804 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:23.610970 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:23.612789 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:23.613062 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:23.613749 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:23.614372 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:23.614979 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:23.615046 | orchestrator |
2025-06-02 19:41:23.615690 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-06-02 19:41:23.616265 | orchestrator | Monday 02 June 2025 19:41:23 +0000 (0:00:01.085) 0:01:02.378 ***********
2025-06-02 19:41:25.319531 | orchestrator | changed: [testbed-manager]
2025-06-02 19:41:25.320601 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:41:25.321657 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:41:25.323493 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:41:25.324367 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:41:25.325085 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:41:25.325975 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:41:25.326963 | orchestrator |
2025-06-02 19:41:25.327513 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-06-02 19:41:25.328102 | orchestrator | Monday 02 June 2025 19:41:25 +0000 (0:00:02.160) 0:01:04.086 ***********
2025-06-02 19:41:27.480211 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:27.480338 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:27.480438 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:27.480912 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:27.481494 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:27.482886 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:27.483453 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:27.483864 | orchestrator |
2025-06-02 19:41:27.484395 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-06-02 19:41:27.484810 | orchestrator | Monday 02 June 2025 19:41:27 +0000 (0:00:02.160) 0:01:06.246 ***********
2025-06-02 19:42:05.792523 | orchestrator | ok: [testbed-manager]
2025-06-02 19:42:05.793068 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:42:05.794315 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:42:05.796683 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:42:05.797960 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:42:05.798922 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:42:05.799759 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:42:05.801094 | orchestrator |
2025-06-02 19:42:05.802214 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-06-02 19:42:05.803273 | orchestrator | Monday 02 June 2025 19:42:05 +0000 (0:00:38.312) 0:01:44.559 ***********
2025-06-02 19:43:23.231969 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:23.232091 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:23.232108 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:23.232119 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:23.232130 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:23.232204 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:23.233830 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:23.234150 | orchestrator |
2025-06-02 19:43:23.235051 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-06-02 19:43:23.235840 | orchestrator | Monday 02 June 2025 19:43:23 +0000 (0:01:17.439) 0:03:01.998 ***********
2025-06-02 19:43:24.885307 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:24.886055 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:24.888516 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:24.888555 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:24.889603 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:24.890658 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:24.891274 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:24.891929 | orchestrator |
2025-06-02 19:43:24.892880 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-06-02 19:43:24.893143 | orchestrator | Monday 02 June 2025 19:43:24 +0000 (0:00:01.654) 0:03:03.652 ***********
2025-06-02 19:43:36.534233 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:36.534337 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:36.534352 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:36.534626 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:36.534648 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:36.536148 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:36.536780 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:36.537382 | orchestrator |
2025-06-02 19:43:36.537984 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-06-02 19:43:36.538631 | orchestrator | Monday 02 June 2025 19:43:36 +0000 (0:00:11.643) 0:03:15.296 ***********
2025-06-02 19:43:36.900119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-06-02 19:43:36.900830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-06-02 19:43:36.901391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-06-02 19:43:36.902625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-06-02 19:43:36.904673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-06-02 19:43:36.904695 | orchestrator |
2025-06-02 19:43:36.904784 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-06-02 19:43:36.905658 | orchestrator | Monday 02 June 2025 19:43:36 +0000 (0:00:00.372) 0:03:15.669 ***********
2025-06-02 19:43:36.953706 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:43:36.977762 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:37.077306 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:43:37.543877 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:37.544698 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:43:37.544863 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:37.545824 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:43:37.546345 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:37.547448 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:43:37.548546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:43:37.549954 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:43:37.550764 | orchestrator |
2025-06-02 19:43:37.551460 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-06-02 19:43:37.552135 | orchestrator | Monday 02 June 2025 19:43:37 +0000 (0:00:00.644) 0:03:16.313 ***********
2025-06-02 19:43:37.607813 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:43:37.607919 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:43:37.607934 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:43:37.608423 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:43:37.609244 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:43:37.642216 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:43:37.642442 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:43:37.643529 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:43:37.643938 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:43:37.647624 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:43:37.669964 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:37.724329 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:43:37.724426 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:43:37.724438 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:43:37.724728 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:43:37.724959 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:43:42.101396 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:43:42.102535 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:43:42.102997 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:43:42.104993 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:43:42.106430 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:43:42.108183 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:43:42.109323 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:43:42.110613 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:43:42.111183 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:43:42.112368 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:43:42.113147 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:43:42.114415 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:43:42.114734 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:43:42.115915 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:43:42.117226 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:42.118140 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:43:42.118952 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:42.119853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:43:42.120336 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:43:42.120969 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:43:42.121484 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:43:42.122454 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:43:42.122814 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:43:42.123682 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:43:42.124034 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:43:42.124839 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:43:42.125595 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:43:42.125931 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:42.126702 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:43:42.126960 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:43:42.127466 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:43:42.127879 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:43:42.130989 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:43:42.131552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:43:42.132038 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:43:42.132623 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:43:42.133158 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:43:42.133802 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:43:42.134502 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:43:42.135189 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:43:42.135665 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:43:42.136088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:43:42.136821 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:43:42.137169 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:43:42.137670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:43:42.138123 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:43:42.140531 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:43:42.140561 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:43:42.140572 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:43:42.140611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:43:42.140630 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:43:42.140649 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:43:42.140668 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:43:42.140754 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:43:42.141057 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:43:42.141755 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:43:42.142107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:43:42.142767 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:43:42.143096 | orchestrator |
2025-06-02 19:43:42.143632 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-06-02 19:43:42.144122 | orchestrator | Monday 02 June 2025 19:43:42 +0000 (0:00:04.555) 0:03:20.868 ***********
2025-06-02 19:43:43.580344 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:43:43.580871 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:43:43.582647 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:43:43.583002 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:43:43.583869 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:43:43.585016 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:43:43.585678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:43:43.586472 | orchestrator |
2025-06-02 19:43:43.587009 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-06-02 19:43:43.587690 | orchestrator | Monday 02 June 2025 19:43:43 +0000 (0:00:01.479) 0:03:22.348 ***********
2025-06-02 19:43:43.634353 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:43:43.664923 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:43.665132 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:43:43.693273 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:43.693730 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:43:43.694367 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:43:43.720897 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:43.747030 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:44.139381 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:43:44.139503 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:43:44.141295 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:43:44.142132 | orchestrator |
2025-06-02 19:43:44.143165 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-06-02 19:43:44.144205 | orchestrator | Monday 02 June 2025 19:43:44 +0000 (0:00:00.559) 0:03:22.908 ***********
2025-06-02 19:43:44.197538 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:43:44.233998 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:43:44.234119 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:44.234126 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:43:44.257990 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:44.288416 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:44.288769 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:43:44.313227 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:44.770375 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:43:44.771536 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:43:44.773250 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:43:44.774258 | orchestrator |
2025-06-02 19:43:44.775138 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-06-02 19:43:44.776204 | orchestrator | Monday 02 June 2025 19:43:44 +0000 (0:00:00.631) 0:03:23.539 ***********
2025-06-02 19:43:44.850857 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:44.873799 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:44.898991 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:44.919395 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:45.034356 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:45.035840 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:45.036889 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:45.037958 | orchestrator |
2025-06-02 19:43:45.038668 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-06-02 19:43:45.039642 | orchestrator | Monday 02 June 2025 19:43:45 +0000 (0:00:00.263) 0:03:23.803 ***********
2025-06-02 19:43:50.781628 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:50.783467 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:50.783549 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:50.784780 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:50.786712 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:50.787660 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:50.788593 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:50.788765 | orchestrator |
2025-06-02 19:43:50.789453 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-06-02 19:43:50.789833 | orchestrator | Monday 02 June 2025 19:43:50 +0000 (0:00:05.747) 0:03:29.550 ***********
2025-06-02 19:43:50.856776 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-06-02 19:43:50.893471 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-06-02 19:43:50.894217 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:50.895103 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-06-02 19:43:50.928893 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:50.929703 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-06-02 19:43:50.959843 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:50.998545 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-06-02 19:43:50.998770 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:50.999248 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-06-02 19:43:51.052186 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:51.053433 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:51.054116 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-02 19:43:51.054755 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:51.055751 | orchestrator |
2025-06-02 19:43:51.056080 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-06-02 19:43:51.056919 | orchestrator | Monday 02 June 2025 19:43:51 +0000 (0:00:00.271) 0:03:29.822 ***********
2025-06-02 19:43:52.079353 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-06-02 19:43:52.079462 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-06-02 19:43:52.079953 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-06-02 19:43:52.080770 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-06-02 19:43:52.081652 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-06-02 19:43:52.082732 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-06-02 19:43:52.083257 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-06-02 19:43:52.083911 | orchestrator |
2025-06-02 19:43:52.084519 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-06-02 19:43:52.085105 | orchestrator | Monday 02 June 2025 19:43:52 +0000 (0:00:01.023) 0:03:30.846 ***********
2025-06-02 19:43:52.567667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:43:52.568099 | orchestrator |
2025-06-02 19:43:52.570840 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-06-02 19:43:52.571302 | orchestrator | Monday 02 June 2025 19:43:52 +0000 (0:00:00.488) 0:03:31.335 ***********
2025-06-02 19:43:53.779764 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:53.780312 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:53.780352 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:53.781026 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:53.781543 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:53.782216 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:53.782716 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:53.783314 | orchestrator |
2025-06-02 19:43:53.783818 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-06-02 19:43:53.784593 | orchestrator | Monday 02 June 2025 19:43:53 +0000 (0:00:01.213) 0:03:32.548 ***********
2025-06-02 19:43:54.371941 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:54.373596 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:54.376747 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:54.376792 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:54.376802 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:54.377604 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:54.378484 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:54.378819 | orchestrator |
2025-06-02 19:43:54.379335 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-06-02 19:43:54.380051 | orchestrator | Monday 02 June 2025 19:43:54 +0000 (0:00:00.591) 0:03:33.140 ***********
2025-06-02 19:43:54.963127 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:54.963364 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:54.965910 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:54.966522 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:54.967096 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:54.967626 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:54.968373 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:54.968917 | orchestrator |
2025-06-02 19:43:54.969373 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-06-02 19:43:54.969853 | orchestrator | Monday 02 June 2025 19:43:54 +0000 (0:00:00.591) 0:03:33.731 ***********
2025-06-02 19:43:55.564481 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:55.565249 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:55.566302 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:55.567344 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:55.568019 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:55.569065 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:55.569808 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:55.570661 | orchestrator |
2025-06-02 19:43:55.571194 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-06-02 19:43:55.572473 | orchestrator | Monday 02 June 2025 19:43:55 +0000 (0:00:00.600) 0:03:34.332 ***********
2025-06-02 19:43:56.543539 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748891982.6711318, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 19:43:56.543702 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892043.6878405, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name':
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.543746 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892042.740016, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.544087 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892054.2241478, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.546766 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892044.9107318, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.547186 | orchestrator | changed: 
[testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892043.798507, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.548400 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892047.185809, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.549363 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748892006.1234772, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.549630 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 
'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891940.1895683, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.550820 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891937.312156, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.551754 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891946.5151596, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.552242 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891940.0844023, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.552884 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891942.5615396, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.553387 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891941.866988, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:43:56.553805 | orchestrator | 2025-06-02 19:43:56.554496 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-02 19:43:56.554783 | orchestrator | Monday 02 June 2025 19:43:56 +0000 (0:00:00.979) 0:03:35.311 *********** 2025-06-02 19:43:57.732071 | orchestrator | changed: [testbed-manager] 2025-06-02 19:43:57.735318 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:43:57.736023 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:43:57.736049 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:43:57.736062 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:43:57.736075 | orchestrator | changed: [testbed-node-5] 
2025-06-02 19:43:57.736874 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:43:57.738563 | orchestrator | 2025-06-02 19:43:57.739694 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-02 19:43:57.740356 | orchestrator | Monday 02 June 2025 19:43:57 +0000 (0:00:01.189) 0:03:36.500 *********** 2025-06-02 19:43:58.910492 | orchestrator | changed: [testbed-manager] 2025-06-02 19:43:58.910877 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:43:58.911639 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:43:58.912868 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:43:58.913908 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:43:58.914302 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:43:58.915383 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:43:58.915832 | orchestrator | 2025-06-02 19:43:58.916501 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-02 19:43:58.917159 | orchestrator | Monday 02 June 2025 19:43:58 +0000 (0:00:01.177) 0:03:37.678 *********** 2025-06-02 19:44:00.047089 | orchestrator | changed: [testbed-manager] 2025-06-02 19:44:00.047398 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:00.048433 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:00.050732 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:00.050759 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:00.050770 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:00.051800 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:00.052512 | orchestrator | 2025-06-02 19:44:00.052877 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-06-02 19:44:00.053560 | orchestrator | Monday 02 June 2025 19:44:00 +0000 (0:00:01.136) 0:03:38.815 *********** 2025-06-02 19:44:00.115670 | orchestrator | skipping: [testbed-manager] 2025-06-02 
19:44:00.164246 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:44:00.197801 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:44:00.230263 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:44:00.269777 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:44:00.330251 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:44:00.330426 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:44:00.331406 | orchestrator | 2025-06-02 19:44:00.332345 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-02 19:44:00.332992 | orchestrator | Monday 02 June 2025 19:44:00 +0000 (0:00:00.282) 0:03:39.098 *********** 2025-06-02 19:44:01.057760 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:01.059752 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:01.059829 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:01.059844 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:01.059856 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:01.059926 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:01.060896 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:01.061560 | orchestrator | 2025-06-02 19:44:01.062220 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-02 19:44:01.062861 | orchestrator | Monday 02 June 2025 19:44:01 +0000 (0:00:00.724) 0:03:39.823 *********** 2025-06-02 19:44:01.436786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:44:01.436950 | orchestrator | 2025-06-02 19:44:01.437501 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-02 19:44:01.437975 | orchestrator | Monday 02 June 2025 19:44:01 +0000 (0:00:00.383) 0:03:40.206 
*********** 2025-06-02 19:44:09.904342 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:09.904459 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:09.906624 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:09.907688 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:09.909111 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:09.910310 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:09.910982 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:09.911487 | orchestrator | 2025-06-02 19:44:09.912230 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-02 19:44:09.912822 | orchestrator | Monday 02 June 2025 19:44:09 +0000 (0:00:08.462) 0:03:48.668 *********** 2025-06-02 19:44:11.125698 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:11.127595 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:11.128467 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:11.129784 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:11.131187 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:11.132906 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:11.133344 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:11.134299 | orchestrator | 2025-06-02 19:44:11.135343 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-02 19:44:11.135562 | orchestrator | Monday 02 June 2025 19:44:11 +0000 (0:00:01.223) 0:03:49.892 *********** 2025-06-02 19:44:12.099753 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:12.100099 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:12.104147 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:12.104201 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:12.104215 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:12.104225 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:12.104284 | orchestrator | ok: [testbed-node-4] 2025-06-02 
19:44:12.105394 | orchestrator | 2025-06-02 19:44:12.106748 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-02 19:44:12.106843 | orchestrator | Monday 02 June 2025 19:44:12 +0000 (0:00:00.975) 0:03:50.867 *********** 2025-06-02 19:44:12.583656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:44:12.583830 | orchestrator | 2025-06-02 19:44:12.584408 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-02 19:44:12.585748 | orchestrator | Monday 02 June 2025 19:44:12 +0000 (0:00:00.485) 0:03:51.352 *********** 2025-06-02 19:44:20.752365 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:20.752493 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:20.752635 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:20.753250 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:20.755384 | orchestrator | changed: [testbed-manager] 2025-06-02 19:44:20.756520 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:20.757564 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:20.758610 | orchestrator | 2025-06-02 19:44:20.759076 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-02 19:44:20.759889 | orchestrator | Monday 02 June 2025 19:44:20 +0000 (0:00:08.163) 0:03:59.516 *********** 2025-06-02 19:44:21.355170 | orchestrator | changed: [testbed-manager] 2025-06-02 19:44:21.355687 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:21.356669 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:21.358092 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:21.359031 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:21.359548 | 
orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:21.360274 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:21.361396 | orchestrator | 2025-06-02 19:44:21.362079 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-02 19:44:21.362429 | orchestrator | Monday 02 June 2025 19:44:21 +0000 (0:00:00.608) 0:04:00.125 *********** 2025-06-02 19:44:22.432639 | orchestrator | changed: [testbed-manager] 2025-06-02 19:44:22.432878 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:22.433564 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:22.433911 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:22.434752 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:22.435061 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:22.437965 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:22.438641 | orchestrator | 2025-06-02 19:44:22.439290 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-02 19:44:22.440165 | orchestrator | Monday 02 June 2025 19:44:22 +0000 (0:00:01.075) 0:04:01.200 *********** 2025-06-02 19:44:23.444389 | orchestrator | changed: [testbed-manager] 2025-06-02 19:44:23.444643 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:23.446419 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:23.446836 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:23.448080 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:23.448839 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:23.449843 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:23.450394 | orchestrator | 2025-06-02 19:44:23.451394 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-06-02 19:44:23.451958 | orchestrator | Monday 02 June 2025 19:44:23 +0000 (0:00:01.008) 0:04:02.209 *********** 2025-06-02 19:44:23.561825 | orchestrator | ok: 
[testbed-manager] 2025-06-02 19:44:23.608087 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:23.648059 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:23.682748 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:23.739668 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:23.739818 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:23.740654 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:23.741262 | orchestrator | 2025-06-02 19:44:23.741953 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-06-02 19:44:23.742684 | orchestrator | Monday 02 June 2025 19:44:23 +0000 (0:00:00.301) 0:04:02.510 *********** 2025-06-02 19:44:23.820813 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:23.865018 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:23.895740 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:23.931777 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:23.967959 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:24.052103 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:24.052417 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:24.052820 | orchestrator | 2025-06-02 19:44:24.053594 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-06-02 19:44:24.053976 | orchestrator | Monday 02 June 2025 19:44:24 +0000 (0:00:00.311) 0:04:02.822 *********** 2025-06-02 19:44:24.150603 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:24.188833 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:24.217469 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:24.270654 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:24.346141 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:24.346338 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:24.347010 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:24.347616 | orchestrator | 2025-06-02 19:44:24.348053 | 
orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-02 19:44:24.348647 | orchestrator | Monday 02 June 2025 19:44:24 +0000 (0:00:00.294) 0:04:03.116 *********** 2025-06-02 19:44:30.296526 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:30.296784 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:30.297588 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:30.298416 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:30.299862 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:30.300001 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:30.301100 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:30.302234 | orchestrator | 2025-06-02 19:44:30.302339 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-02 19:44:30.303457 | orchestrator | Monday 02 June 2025 19:44:30 +0000 (0:00:05.949) 0:04:09.065 *********** 2025-06-02 19:44:30.661640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:44:30.661749 | orchestrator | 2025-06-02 19:44:30.661857 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-02 19:44:30.662517 | orchestrator | Monday 02 June 2025 19:44:30 +0000 (0:00:00.360) 0:04:09.426 *********** 2025-06-02 19:44:30.737540 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-02 19:44:30.738235 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-02 19:44:30.738948 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-06-02 19:44:30.739847 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-06-02 19:44:30.787233 | orchestrator | skipping: [testbed-manager] 2025-06-02 
19:44:30.787807 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-06-02 19:44:30.788917 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-06-02 19:44:30.835285 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:44:30.835368 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-06-02 19:44:30.835382 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-06-02 19:44:30.872019 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:44:30.872149 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-06-02 19:44:30.912972 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:44:30.913049 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-06-02 19:44:30.913489 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-06-02 19:44:30.913712 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-06-02 19:44:30.989284 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:44:30.989446 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:44:30.989704 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-06-02 19:44:30.990725 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-06-02 19:44:30.990997 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:44:30.991654 | orchestrator | 2025-06-02 19:44:30.991888 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-06-02 19:44:30.992523 | orchestrator | Monday 02 June 2025 19:44:30 +0000 (0:00:00.332) 0:04:09.759 *********** 2025-06-02 19:44:31.415885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:44:31.416049 | orchestrator | 2025-06-02 
19:44:31.417292 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-06-02 19:44:31.423505 | orchestrator | Monday 02 June 2025 19:44:31 +0000 (0:00:00.425) 0:04:10.185 *********** 2025-06-02 19:44:31.497283 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-06-02 19:44:31.497686 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-06-02 19:44:31.531130 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:44:31.573097 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:44:31.573792 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-06-02 19:44:31.574671 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-06-02 19:44:31.608769 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:44:31.658764 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:44:31.658918 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-06-02 19:44:31.660383 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-06-02 19:44:31.723236 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:44:31.727822 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:44:31.727853 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-06-02 19:44:31.727866 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:44:31.727877 | orchestrator | 2025-06-02 19:44:31.728046 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-06-02 19:44:31.728913 | orchestrator | Monday 02 June 2025 19:44:31 +0000 (0:00:00.307) 0:04:10.492 *********** 2025-06-02 19:44:32.258776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5
2025-06-02 19:44:32.260235 | orchestrator |
2025-06-02 19:44:32.260919 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-02 19:44:32.262173 | orchestrator | Monday 02 June 2025 19:44:32 +0000 (0:00:00.534) 0:04:11.026 ***********
2025-06-02 19:45:07.049827 | orchestrator | changed: [testbed-manager]
2025-06-02 19:45:07.049942 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:07.051036 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:07.053619 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:07.054534 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:07.055265 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:07.055675 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:07.056207 | orchestrator |
2025-06-02 19:45:07.057173 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-02 19:45:07.057234 | orchestrator | Monday 02 June 2025 19:45:07 +0000 (0:00:34.788) 0:04:45.815 ***********
2025-06-02 19:45:15.156976 | orchestrator | changed: [testbed-manager]
2025-06-02 19:45:15.157902 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:15.158853 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:15.159790 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:15.162204 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:15.162936 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:15.164169 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:15.164792 | orchestrator |
2025-06-02 19:45:15.165415 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-02 19:45:15.166228 | orchestrator | Monday 02 June 2025 19:45:15 +0000 (0:00:08.109) 0:04:53.925 ***********
2025-06-02 19:45:22.862220 | orchestrator | changed: [testbed-manager]
2025-06-02 19:45:22.862398 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:22.862784 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:22.863280 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:22.863903 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:22.864412 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:22.866974 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:22.867454 | orchestrator |
2025-06-02 19:45:22.868014 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-02 19:45:22.868644 | orchestrator | Monday 02 June 2025 19:45:22 +0000 (0:00:07.704) 0:05:01.629 ***********
2025-06-02 19:45:24.611357 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:24.612682 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:24.613251 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:24.615172 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:24.616441 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:24.618155 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:24.619904 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:24.622741 | orchestrator |
2025-06-02 19:45:24.624736 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-02 19:45:24.625725 | orchestrator | Monday 02 June 2025 19:45:24 +0000 (0:00:01.748) 0:05:03.378 ***********
2025-06-02 19:45:30.431659 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:30.431911 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:30.432484 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:30.433650 | orchestrator | changed: [testbed-manager]
2025-06-02 19:45:30.434132 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:30.434833 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:30.436652 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:30.437383 | orchestrator |
2025-06-02 19:45:30.438341 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-02 19:45:30.438605 | orchestrator | Monday 02 June 2025 19:45:30 +0000 (0:00:00.411) 0:05:09.196 ***********
2025-06-02 19:45:30.841088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:45:30.841251 | orchestrator |
2025-06-02 19:45:30.841364 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-02 19:45:30.841772 | orchestrator | Monday 02 June 2025 19:45:30 +0000 (0:00:00.411) 0:05:09.608 ***********
2025-06-02 19:45:31.569048 | orchestrator | changed: [testbed-manager]
2025-06-02 19:45:31.569218 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:31.572731 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:31.573160 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:31.574642 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:31.575594 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:31.576256 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:31.577036 | orchestrator |
2025-06-02 19:45:31.577808 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-02 19:45:31.578692 | orchestrator | Monday 02 June 2025 19:45:31 +0000 (0:00:00.727) 0:05:10.335 ***********
2025-06-02 19:45:33.261233 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:33.261343 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:33.261425 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:33.261753 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:33.263923 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:33.264519 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:33.266435 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:33.267282 | orchestrator |
2025-06-02 19:45:33.267725 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-02 19:45:33.269015 | orchestrator | Monday 02 June 2025 19:45:33 +0000 (0:00:01.689) 0:05:12.025 ***********
2025-06-02 19:45:34.117143 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:34.117296 | orchestrator | changed: [testbed-manager]
2025-06-02 19:45:34.120133 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:34.120161 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:34.120835 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:34.120855 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:34.121738 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:34.122285 | orchestrator |
2025-06-02 19:45:34.123124 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-02 19:45:34.123802 | orchestrator | Monday 02 June 2025 19:45:34 +0000 (0:00:00.858) 0:05:12.884 ***********
2025-06-02 19:45:34.236072 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:34.269648 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:34.304094 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:34.338178 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:34.398307 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:34.398430 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:34.398542 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:34.398739 | orchestrator |
2025-06-02 19:45:34.399243 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-02 19:45:34.399788 | orchestrator | Monday 02 June 2025 19:45:34 +0000 (0:00:00.282) 0:05:13.166 ***********
2025-06-02 19:45:34.472535 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:34.505322 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:34.537472 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:34.579605 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:34.635174 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:34.818829 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:34.819166 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:34.820087 | orchestrator |
2025-06-02 19:45:34.820964 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-02 19:45:34.821477 | orchestrator | Monday 02 June 2025 19:45:34 +0000 (0:00:00.417) 0:05:13.583 ***********
2025-06-02 19:45:34.915392 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:34.947270 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:34.982792 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:35.030139 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:35.092467 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:35.093378 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:35.093953 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:35.094664 | orchestrator |
2025-06-02 19:45:35.095306 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-02 19:45:35.095865 | orchestrator | Monday 02 June 2025 19:45:35 +0000 (0:00:00.278) 0:05:13.862 ***********
2025-06-02 19:45:35.155977 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:35.189638 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:35.232857 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:35.266216 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:35.297646 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:35.360291 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:35.360462 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:35.360480 | orchestrator |
2025-06-02 19:45:35.360616 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-06-02 19:45:35.360814 | orchestrator | Monday 02 June 2025 19:45:35 +0000 (0:00:00.268) 0:05:14.130 ***********
2025-06-02 19:45:35.467805 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:35.525844 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:35.570152 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:35.608420 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:35.693719 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:35.694747 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:35.695975 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:35.697144 | orchestrator |
2025-06-02 19:45:35.697986 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-06-02 19:45:35.699158 | orchestrator | Monday 02 June 2025 19:45:35 +0000 (0:00:00.297) 0:05:14.462 ***********
2025-06-02 19:45:35.786387 | orchestrator | ok: [testbed-manager] =>
2025-06-02 19:45:35.787315 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:45:35.819868 | orchestrator | ok: [testbed-node-0] =>
2025-06-02 19:45:35.820242 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:45:35.891151 | orchestrator | ok: [testbed-node-1] =>
2025-06-02 19:45:35.891639 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:45:35.922917 | orchestrator | ok: [testbed-node-2] =>
2025-06-02 19:45:35.922972 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:45:35.990728 | orchestrator | ok: [testbed-node-3] =>
2025-06-02 19:45:35.992498 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:45:35.992907 | orchestrator | ok: [testbed-node-4] =>
2025-06-02 19:45:35.994137 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:45:35.996040 | orchestrator | ok: [testbed-node-5] =>
2025-06-02 19:45:35.996463 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:45:35.997284 | orchestrator |
2025-06-02 19:45:35.998978 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-06-02 19:45:35.999339 | orchestrator | Monday 02 June 2025 19:45:35 +0000 (0:00:00.297) 0:05:14.760 ***********
2025-06-02 19:45:36.219613 | orchestrator | ok: [testbed-manager] =>
2025-06-02 19:45:36.220461 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:45:36.253921 | orchestrator | ok: [testbed-node-0] =>
2025-06-02 19:45:36.254635 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:45:36.289753 | orchestrator | ok: [testbed-node-1] =>
2025-06-02 19:45:36.290501 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:45:36.322330 | orchestrator | ok: [testbed-node-2] =>
2025-06-02 19:45:36.323055 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:45:36.392499 | orchestrator | ok: [testbed-node-3] =>
2025-06-02 19:45:36.393011 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:45:36.394971 | orchestrator | ok: [testbed-node-4] =>
2025-06-02 19:45:36.395729 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:45:36.396957 | orchestrator | ok: [testbed-node-5] =>
2025-06-02 19:45:36.397784 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:45:36.398536 | orchestrator |
2025-06-02 19:45:36.399465 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-06-02 19:45:36.400089 | orchestrator | Monday 02 June 2025 19:45:36 +0000 (0:00:00.400) 0:05:15.161 ***********
2025-06-02 19:45:36.491157 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:36.527892 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:36.556364 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:36.593779 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:36.663942 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:36.664692 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:36.665879 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:36.666392 | orchestrator |
2025-06-02 19:45:36.667620 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-06-02 19:45:36.668388 | orchestrator | Monday 02 June 2025 19:45:36 +0000 (0:00:00.273) 0:05:15.434 ***********
2025-06-02 19:45:36.744657 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:36.909916 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:36.909990 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:36.910003 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:36.924466 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:37.004710 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:37.005142 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:37.006083 | orchestrator |
2025-06-02 19:45:37.007330 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-06-02 19:45:37.008026 | orchestrator | Monday 02 June 2025 19:45:36 +0000 (0:00:00.339) 0:05:15.774 ***********
2025-06-02 19:45:37.462858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:45:37.463022 | orchestrator |
2025-06-02 19:45:37.465324 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-06-02 19:45:37.465353 | orchestrator | Monday 02 June 2025 19:45:37 +0000 (0:00:00.456) 0:05:16.230 ***********
2025-06-02 19:45:38.326773 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:38.328279 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:38.328310 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:38.330473 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:38.330520 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:38.331443 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:38.332243 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:38.333741 | orchestrator |
2025-06-02 19:45:38.334062 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-06-02 19:45:38.334524 | orchestrator | Monday 02 June 2025 19:45:38 +0000 (0:00:00.863) 0:05:17.094 ***********
2025-06-02 19:45:41.090207 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:41.090414 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:41.091692 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:41.094779 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:41.095739 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:41.096612 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:41.097330 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:41.098530 | orchestrator |
2025-06-02 19:45:41.099614 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-06-02 19:45:41.100635 | orchestrator | Monday 02 June 2025 19:45:41 +0000 (0:00:02.763) 0:05:19.858 ***********
2025-06-02 19:45:41.173751 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-06-02 19:45:41.174197 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-06-02 19:45:41.174825 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-06-02 19:45:41.240993 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:41.242112 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-06-02 19:45:41.242453 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-06-02 19:45:41.316314 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-06-02 19:45:41.316419 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-06-02 19:45:41.316433 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-06-02 19:45:41.397798 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:41.401457 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-06-02 19:45:41.401486 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-06-02 19:45:41.401499 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-06-02 19:45:41.403129 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-06-02 19:45:41.626445 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:41.626818 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-06-02 19:45:41.627647 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-06-02 19:45:41.631111 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-06-02 19:45:41.697167 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:41.698747 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-06-02 19:45:41.701805 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-06-02 19:45:41.701948 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-06-02 19:45:41.822481 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:41.823085 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:41.827084 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-06-02 19:45:41.827723 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-06-02 19:45:41.828831 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-06-02 19:45:41.829302 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:41.830162 | orchestrator |
2025-06-02 19:45:41.830848 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-06-02 19:45:41.833277 | orchestrator | Monday 02 June 2025 19:45:41 +0000 (0:00:00.730) 0:05:20.589 ***********
2025-06-02 19:45:48.301459 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:48.301739 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:48.302532 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:48.303729 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:48.305459 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:48.305861 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:48.306863 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:48.307673 | orchestrator |
2025-06-02 19:45:48.308545 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-06-02 19:45:48.308954 | orchestrator | Monday 02 June 2025 19:45:48 +0000 (0:00:06.480) 0:05:27.069 ***********
2025-06-02 19:45:49.314328 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:49.314788 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:49.315815 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:49.316525 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:49.319133 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:49.320057 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:49.320835 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:49.321344 | orchestrator |
2025-06-02 19:45:49.322106 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-06-02 19:45:49.322825 | orchestrator | Monday 02 June 2025 19:45:49 +0000 (0:00:01.011) 0:05:28.081 ***********
2025-06-02 19:45:57.269939 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:57.270007 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:57.270128 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:57.271662 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:57.272790 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:57.273655 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:57.274820 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:57.275458 | orchestrator |
2025-06-02 19:45:57.276262 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-06-02 19:45:57.277745 | orchestrator | Monday 02 June 2025 19:45:57 +0000 (0:00:07.953) 0:05:36.034 ***********
2025-06-02 19:46:01.950099 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:01.950472 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:01.951304 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:01.952395 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:01.952775 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:01.953231 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:01.953949 | orchestrator | changed: [testbed-manager]
2025-06-02 19:46:01.954131 | orchestrator |
2025-06-02 19:46:01.955000 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-06-02 19:46:01.956132 | orchestrator | Monday 02 June 2025 19:46:01 +0000 (0:00:04.683) 0:05:40.718 ***********
2025-06-02 19:46:03.450250 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:03.451485 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:03.452143 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:03.453101 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:03.454135 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:03.454870 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:03.455448 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:03.456170 | orchestrator |
2025-06-02 19:46:03.456705 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-06-02 19:46:03.457164 | orchestrator | Monday 02 June 2025 19:46:03 +0000 (0:00:01.497) 0:05:42.216 ***********
2025-06-02 19:46:04.763908 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:04.764921 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:04.765449 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:04.767833 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:04.768753 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:04.769666 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:04.770641 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:04.771909 | orchestrator |
2025-06-02 19:46:04.772998 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-06-02 19:46:04.773514 | orchestrator | Monday 02 June 2025 19:46:04 +0000 (0:00:01.314) 0:05:43.530 ***********
2025-06-02 19:46:04.962956 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:05.035703 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:05.099228 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:05.163178 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:05.362848 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:05.363805 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:05.365400 | orchestrator | changed: [testbed-manager]
2025-06-02 19:46:05.366290 | orchestrator |
2025-06-02 19:46:05.367510 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-06-02 19:46:05.368476 | orchestrator | Monday 02 June 2025 19:46:05 +0000 (0:00:00.601) 0:05:44.132 ***********
2025-06-02 19:46:15.199317 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:15.199436 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:15.200067 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:15.201312 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:15.202200 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:15.203283 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:15.203946 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:15.204975 | orchestrator |
2025-06-02 19:46:15.205278 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-06-02 19:46:15.206522 | orchestrator | Monday 02 June 2025 19:46:15 +0000 (0:00:09.831) 0:05:53.963 ***********
2025-06-02 19:46:16.148261 | orchestrator | changed: [testbed-manager]
2025-06-02 19:46:16.148358 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:16.148487 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:16.149313 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:16.149730 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:16.150170 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:16.152061 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:16.152220 | orchestrator |
2025-06-02 19:46:16.152755 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-06-02 19:46:16.153172 | orchestrator | Monday 02 June 2025 19:46:16 +0000 (0:00:00.952) 0:05:54.916 ***********
2025-06-02 19:46:25.164651 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:25.165796 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:25.166240 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:25.167833 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:25.170139 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:25.171291 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:25.172157 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:25.172401 | orchestrator |
2025-06-02 19:46:25.173687 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-06-02 19:46:25.176433 | orchestrator | Monday 02 June 2025 19:46:25 +0000 (0:00:09.015) 0:06:03.931 ***********
2025-06-02 19:46:36.100179 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:36.100293 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:36.102466 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:36.102521 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:36.102575 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:36.102588 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:36.102599 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:36.102610 | orchestrator |
2025-06-02 19:46:36.102622 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-06-02 19:46:36.102691 | orchestrator | Monday 02 June 2025 19:46:36 +0000 (0:00:10.935) 0:06:14.866 ***********
2025-06-02 19:46:36.513987 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-06-02 19:46:37.259878 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-06-02 19:46:37.261670 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-06-02 19:46:37.263070 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-02 19:46:37.264513 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-06-02 19:46:37.265562 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-06-02 19:46:37.266750 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-02 19:46:37.267677 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-06-02 19:46:37.268705 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-06-02 19:46:37.269102 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-02 19:46:37.269966 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-02 19:46:37.271011 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-02 19:46:37.271898 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-02 19:46:37.272637 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-02 19:46:37.273223 | orchestrator |
2025-06-02 19:46:37.273777 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-02 19:46:37.274895 | orchestrator | Monday 02 June 2025 19:46:37 +0000 (0:00:01.160) 0:06:16.027 ***********
2025-06-02 19:46:37.395422 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:46:37.459915 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:37.530898 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:37.594145 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:37.660889 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:37.776962 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:37.777120 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:37.777761 | orchestrator |
2025-06-02 19:46:37.778593 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-02 19:46:37.779302 | orchestrator | Monday 02 June 2025 19:46:37 +0000 (0:00:00.520) 0:06:16.547 ***********
2025-06-02 19:46:41.431194 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:41.431466 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:41.432024 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:41.434449 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:41.434477 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:41.437074 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:41.437754 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:41.438501 | orchestrator |
2025-06-02 19:46:41.439848 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-02 19:46:41.440598 | orchestrator | Monday 02 June 2025 19:46:41 +0000 (0:00:03.650) 0:06:20.198 ***********
2025-06-02 19:46:41.564528 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:46:41.628053 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:41.691479 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:41.763507 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:41.834822 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:41.925208 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:41.925426 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:41.926478 | orchestrator |
2025-06-02 19:46:41.930821 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-02 19:46:41.930924 | orchestrator | Monday 02 June 2025 19:46:41 +0000 (0:00:00.495) 0:06:20.693 ***********
2025-06-02 19:46:42.002072 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-02 19:46:42.002778 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-02 19:46:42.073516 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:46:42.073788 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-02 19:46:42.074699 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-02 19:46:42.141627 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:42.141727 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-06-02 19:46:42.142198 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-06-02 19:46:42.216605 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:42.217278 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-06-02 19:46:42.218200 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-06-02 19:46:42.286795 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:42.287556 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-02 19:46:42.287864 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-02 19:46:42.364019 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:42.364209 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-02 19:46:42.365491 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-02 19:46:42.496259 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:42.496961 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-02 19:46:42.498317 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-02 19:46:42.498936 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:42.502476 | orchestrator |
2025-06-02 19:46:42.503490 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-06-02 19:46:42.504024 | orchestrator | Monday 02 June 2025 19:46:42 +0000 (0:00:00.573) 0:06:21.266 ***********
2025-06-02 19:46:42.628127 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:46:42.726808 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:42.790845 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:42.853723 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:42.923141 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:43.024221 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:43.024406 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:43.024703 | orchestrator |
2025-06-02 19:46:43.025146 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-06-02 19:46:43.025437 | orchestrator | Monday 02 June 2025 19:46:43 +0000 (0:00:00.526) 0:06:21.793 ***********
2025-06-02 19:46:43.156971 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:46:43.217806 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:43.277646 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:43.344797 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:43.408162 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:43.512826 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:43.513838 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:43.517643 | orchestrator |
2025-06-02 19:46:43.517718 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-06-02 19:46:43.517734 | orchestrator | Monday 02 June 2025 19:46:43 +0000 (0:00:00.486) 0:06:22.279 ***********
2025-06-02 19:46:43.645135 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:46:43.705643 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:43.773378 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:44.023874 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:44.091531 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:44.211036 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:44.211156 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:44.211835 | orchestrator |
2025-06-02 19:46:44.212019 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-02 19:46:44.212608 | orchestrator | Monday 02 June 2025 19:46:44 +0000 (0:00:00.699) 0:06:22.979 ***********
2025-06-02 19:46:45.900038 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:45.900146 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:45.901072 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:45.904339 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:45.904364 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:45.905515 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:45.906630 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:45.906746 | orchestrator |
2025-06-02 19:46:45.907795 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-06-02 19:46:45.910645 | orchestrator | Monday 02 June 2025 19:46:45 +0000 (0:00:01.688) 0:06:24.668 ***********
2025-06-02 19:46:46.758374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:46:46.759180 | orchestrator |
2025-06-02 19:46:46.760742 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-06-02 19:46:46.763326 | orchestrator | Monday 02 June 2025 19:46:46 +0000 (0:00:00.857) 0:06:25.526 ***********
2025-06-02 19:46:47.176837 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:47.573089 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:47.573461 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:47.574604 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:47.578712 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:47.578747 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:47.578759 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:47.578771 | orchestrator |
2025-06-02 19:46:47.578783 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-06-02 19:46:47.578795 | orchestrator | Monday 02 June 2025 19:46:47 +0000 (0:00:00.814) 0:06:26.341 ***********
2025-06-02 19:46:47.992707 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:48.057223 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:48.597016 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:48.597581 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:48.597814 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:48.598590 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:48.599207 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:48.599835 | orchestrator |
2025-06-02 19:46:48.600622 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-02 19:46:48.601267 | orchestrator | Monday 02 June 2025 19:46:48 +0000 (0:00:01.024) 0:06:27.365 ***********
2025-06-02 19:46:49.958653 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:49.960279 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:49.963241 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:49.964309 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:49.965411 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:49.966583 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:49.967194 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:49.967990 | orchestrator |
2025-06-02 19:46:49.968638 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-02 19:46:49.968980 | orchestrator | Monday 02 June 2025 19:46:49 +0000 (0:00:01.359) 0:06:28.725 ***********
2025-06-02 19:46:50.088778 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:46:51.356136 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:51.356295 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:51.357490 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:51.358292 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:51.359141 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:51.359984 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:51.360963 | orchestrator |
2025-06-02 19:46:51.361488 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-02 19:46:51.361783 | orchestrator | Monday 02 June 2025 19:46:51 +0000 (0:00:01.396) 0:06:30.122 ***********
2025-06-02 19:46:52.719444 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:52.720084 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:52.720971 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:52.722196 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:52.723118 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:52.724056 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:52.724816 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:52.725724 | orchestrator |
2025-06-02 19:46:52.726726 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-02 19:46:52.727438 | orchestrator | Monday 02 June 2025 19:46:52
+0000 (0:00:01.363) 0:06:31.485 *********** 2025-06-02 19:46:54.055186 | orchestrator | changed: [testbed-manager] 2025-06-02 19:46:54.055707 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:46:54.056395 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:46:54.058162 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:46:54.060871 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:46:54.061011 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:46:54.061804 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:46:54.062436 | orchestrator | 2025-06-02 19:46:54.062955 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-06-02 19:46:54.063594 | orchestrator | Monday 02 June 2025 19:46:54 +0000 (0:00:01.337) 0:06:32.823 *********** 2025-06-02 19:46:55.094264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:46:55.094342 | orchestrator | 2025-06-02 19:46:55.094588 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-02 19:46:55.098599 | orchestrator | Monday 02 June 2025 19:46:55 +0000 (0:00:01.038) 0:06:33.862 *********** 2025-06-02 19:46:56.410255 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:46:56.410481 | orchestrator | ok: [testbed-manager] 2025-06-02 19:46:56.411573 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:46:56.412507 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:46:56.413412 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:46:56.414254 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:46:56.414702 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:46:56.415556 | orchestrator | 2025-06-02 19:46:56.416491 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 
2025-06-02 19:46:56.416990 | orchestrator | Monday 02 June 2025 19:46:56 +0000 (0:00:01.315) 0:06:35.178 *********** 2025-06-02 19:46:57.541342 | orchestrator | ok: [testbed-manager] 2025-06-02 19:46:57.541695 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:46:57.543308 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:46:57.543623 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:46:57.545241 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:46:57.545970 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:46:57.546810 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:46:57.547761 | orchestrator | 2025-06-02 19:46:57.548327 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-02 19:46:57.548935 | orchestrator | Monday 02 June 2025 19:46:57 +0000 (0:00:01.129) 0:06:36.307 *********** 2025-06-02 19:46:58.909123 | orchestrator | ok: [testbed-manager] 2025-06-02 19:46:58.909371 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:46:58.909482 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:46:58.910200 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:46:58.910638 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:46:58.911500 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:46:58.912843 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:46:58.913382 | orchestrator | 2025-06-02 19:46:58.913863 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-02 19:46:58.914198 | orchestrator | Monday 02 June 2025 19:46:58 +0000 (0:00:01.368) 0:06:37.676 *********** 2025-06-02 19:46:59.994941 | orchestrator | ok: [testbed-manager] 2025-06-02 19:46:59.995048 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:46:59.995639 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:46:59.995973 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:46:59.997025 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:46:59.999672 | orchestrator | ok: [testbed-node-4] 
2025-06-02 19:47:00.000198 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:00.000967 | orchestrator | 2025-06-02 19:47:00.001431 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-02 19:47:00.002488 | orchestrator | Monday 02 June 2025 19:46:59 +0000 (0:00:01.087) 0:06:38.763 *********** 2025-06-02 19:47:01.121852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:47:01.122670 | orchestrator | 2025-06-02 19:47:01.124061 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:47:01.125988 | orchestrator | Monday 02 June 2025 19:47:00 +0000 (0:00:00.845) 0:06:39.608 *********** 2025-06-02 19:47:01.126078 | orchestrator | 2025-06-02 19:47:01.127663 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:47:01.128431 | orchestrator | Monday 02 June 2025 19:47:00 +0000 (0:00:00.038) 0:06:39.647 *********** 2025-06-02 19:47:01.129643 | orchestrator | 2025-06-02 19:47:01.130695 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:47:01.131416 | orchestrator | Monday 02 June 2025 19:47:00 +0000 (0:00:00.037) 0:06:39.684 *********** 2025-06-02 19:47:01.132061 | orchestrator | 2025-06-02 19:47:01.135244 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:47:01.135289 | orchestrator | Monday 02 June 2025 19:47:00 +0000 (0:00:00.044) 0:06:39.729 *********** 2025-06-02 19:47:01.135301 | orchestrator | 2025-06-02 19:47:01.135408 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:47:01.136493 | orchestrator | Monday 02 June 2025 
19:47:00 +0000 (0:00:00.038) 0:06:39.767 *********** 2025-06-02 19:47:01.137093 | orchestrator | 2025-06-02 19:47:01.137734 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:47:01.138524 | orchestrator | Monday 02 June 2025 19:47:01 +0000 (0:00:00.037) 0:06:39.804 *********** 2025-06-02 19:47:01.138912 | orchestrator | 2025-06-02 19:47:01.140018 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:47:01.141004 | orchestrator | Monday 02 June 2025 19:47:01 +0000 (0:00:00.044) 0:06:39.849 *********** 2025-06-02 19:47:01.141719 | orchestrator | 2025-06-02 19:47:01.142846 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-02 19:47:01.144184 | orchestrator | Monday 02 June 2025 19:47:01 +0000 (0:00:00.038) 0:06:39.888 *********** 2025-06-02 19:47:02.439094 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:02.440420 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:02.441029 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:02.442073 | orchestrator | 2025-06-02 19:47:02.442600 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-02 19:47:02.443304 | orchestrator | Monday 02 June 2025 19:47:02 +0000 (0:00:01.315) 0:06:41.204 *********** 2025-06-02 19:47:03.789787 | orchestrator | changed: [testbed-manager] 2025-06-02 19:47:03.792590 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:47:03.792626 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:47:03.792636 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:47:03.792645 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:47:03.792686 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:47:03.792736 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:47:03.793419 | orchestrator | 2025-06-02 19:47:03.793690 | orchestrator | RUNNING HANDLER 
[osism.services.smartd : Restart smartd service] *************** 2025-06-02 19:47:03.794208 | orchestrator | Monday 02 June 2025 19:47:03 +0000 (0:00:01.352) 0:06:42.556 *********** 2025-06-02 19:47:04.881015 | orchestrator | changed: [testbed-manager] 2025-06-02 19:47:04.881126 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:47:04.881141 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:47:04.881153 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:47:04.881165 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:47:04.881175 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:47:04.881186 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:47:04.881416 | orchestrator | 2025-06-02 19:47:04.881695 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-02 19:47:04.882355 | orchestrator | Monday 02 June 2025 19:47:04 +0000 (0:00:01.085) 0:06:43.642 *********** 2025-06-02 19:47:05.019236 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:47:07.745764 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:47:07.745998 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:47:07.748436 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:47:07.748701 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:47:07.749407 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:47:07.749912 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:47:07.750521 | orchestrator | 2025-06-02 19:47:07.751147 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-02 19:47:07.751769 | orchestrator | Monday 02 June 2025 19:47:07 +0000 (0:00:02.868) 0:06:46.511 *********** 2025-06-02 19:47:07.854636 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:47:07.854737 | orchestrator | 2025-06-02 19:47:07.855130 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-02 
19:47:07.855797 | orchestrator | Monday 02 June 2025 19:47:07 +0000 (0:00:00.112) 0:06:46.623 *********** 2025-06-02 19:47:08.876389 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:08.876671 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:47:08.876948 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:47:08.878488 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:47:08.879412 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:47:08.879437 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:47:08.879451 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:47:08.879788 | orchestrator | 2025-06-02 19:47:08.880563 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-02 19:47:08.881134 | orchestrator | Monday 02 June 2025 19:47:08 +0000 (0:00:01.019) 0:06:47.643 *********** 2025-06-02 19:47:09.187069 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:47:09.250226 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:47:09.313504 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:47:09.382400 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:47:09.444300 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:47:09.559189 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:47:09.559432 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:47:09.560020 | orchestrator | 2025-06-02 19:47:09.560706 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-06-02 19:47:09.561329 | orchestrator | Monday 02 June 2025 19:47:09 +0000 (0:00:00.684) 0:06:48.328 *********** 2025-06-02 19:47:10.427213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:47:10.427766 | orchestrator | 2025-06-02 
19:47:10.429045 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-02 19:47:10.430012 | orchestrator | Monday 02 June 2025 19:47:10 +0000 (0:00:00.867) 0:06:49.195 *********** 2025-06-02 19:47:11.358076 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:11.358803 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:11.359997 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:11.361462 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:11.362377 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:47:11.363283 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:11.364409 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:11.364715 | orchestrator | 2025-06-02 19:47:11.365452 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-02 19:47:11.366425 | orchestrator | Monday 02 June 2025 19:47:11 +0000 (0:00:00.929) 0:06:50.125 *********** 2025-06-02 19:47:14.066322 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-02 19:47:14.067670 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-02 19:47:14.067916 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-02 19:47:14.069274 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-02 19:47:14.070696 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-02 19:47:14.071345 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-02 19:47:14.072857 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-02 19:47:14.073271 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-02 19:47:14.074474 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-02 19:47:14.074903 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-02 19:47:14.076043 | orchestrator | changed: 
[testbed-node-2] => (item=docker_images) 2025-06-02 19:47:14.076865 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-02 19:47:14.077482 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-02 19:47:14.078095 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-02 19:47:14.078468 | orchestrator | 2025-06-02 19:47:14.079004 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-02 19:47:14.079709 | orchestrator | Monday 02 June 2025 19:47:14 +0000 (0:00:02.707) 0:06:52.832 *********** 2025-06-02 19:47:14.202350 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:47:14.264357 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:47:14.331382 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:47:14.392836 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:47:14.471011 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:47:14.576379 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:47:14.577682 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:47:14.578895 | orchestrator | 2025-06-02 19:47:14.580628 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-02 19:47:14.582143 | orchestrator | Monday 02 June 2025 19:47:14 +0000 (0:00:00.513) 0:06:53.346 *********** 2025-06-02 19:47:15.373482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:47:15.374700 | orchestrator | 2025-06-02 19:47:15.375063 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-02 19:47:15.376388 | orchestrator | Monday 02 June 2025 19:47:15 +0000 (0:00:00.793) 0:06:54.140 *********** 2025-06-02 
19:47:15.916013 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:15.985952 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:16.398270 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:16.399416 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:16.400652 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:16.400867 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:47:16.401396 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:16.401949 | orchestrator | 2025-06-02 19:47:16.402802 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-02 19:47:16.403292 | orchestrator | Monday 02 June 2025 19:47:16 +0000 (0:00:01.025) 0:06:55.165 *********** 2025-06-02 19:47:16.812121 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:17.186787 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:17.187263 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:17.188651 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:17.189882 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:17.190772 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:47:17.191830 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:17.192907 | orchestrator | 2025-06-02 19:47:17.193705 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-02 19:47:17.194610 | orchestrator | Monday 02 June 2025 19:47:17 +0000 (0:00:00.788) 0:06:55.953 *********** 2025-06-02 19:47:17.321899 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:47:17.384438 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:47:17.448322 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:47:17.525895 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:47:17.589762 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:47:17.677189 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:47:17.677816 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:47:17.679023 | 
orchestrator | 2025-06-02 19:47:17.679823 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-02 19:47:17.680677 | orchestrator | Monday 02 June 2025 19:47:17 +0000 (0:00:00.491) 0:06:56.445 *********** 2025-06-02 19:47:19.047741 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:19.048920 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:19.049780 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:19.050584 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:19.052090 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:19.052796 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:19.053810 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:47:19.054465 | orchestrator | 2025-06-02 19:47:19.055484 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-02 19:47:19.056422 | orchestrator | Monday 02 June 2025 19:47:19 +0000 (0:00:01.370) 0:06:57.816 *********** 2025-06-02 19:47:19.211787 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:47:19.283645 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:47:19.346307 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:47:19.410310 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:47:19.477299 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:47:19.562958 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:47:19.563697 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:47:19.564686 | orchestrator | 2025-06-02 19:47:19.564872 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-02 19:47:19.565483 | orchestrator | Monday 02 June 2025 19:47:19 +0000 (0:00:00.514) 0:06:58.330 *********** 2025-06-02 19:47:27.185834 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:27.186921 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:47:27.187830 | orchestrator | changed: [testbed-node-3] 2025-06-02 
19:47:27.191746 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:47:27.191778 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:47:27.191789 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:47:27.191802 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:47:27.191813 | orchestrator | 2025-06-02 19:47:27.192718 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-02 19:47:27.193429 | orchestrator | Monday 02 June 2025 19:47:27 +0000 (0:00:07.621) 0:07:05.952 *********** 2025-06-02 19:47:28.469157 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:28.469263 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:47:28.469383 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:47:28.469799 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:47:28.470320 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:47:28.471329 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:47:28.471368 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:47:28.471597 | orchestrator | 2025-06-02 19:47:28.475555 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-02 19:47:28.475679 | orchestrator | Monday 02 June 2025 19:47:28 +0000 (0:00:01.282) 0:07:07.234 *********** 2025-06-02 19:47:30.166091 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:30.166693 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:47:30.168338 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:47:30.170382 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:47:30.171258 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:47:30.173023 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:47:30.173861 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:47:30.175418 | orchestrator | 2025-06-02 19:47:30.176311 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-02 
19:47:30.176945 | orchestrator | Monday 02 June 2025 19:47:30 +0000 (0:00:01.699) 0:07:08.934 *********** 2025-06-02 19:47:31.798288 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:31.798569 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:47:31.800419 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:47:31.802778 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:47:31.804668 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:47:31.804760 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:47:31.806432 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:47:31.807829 | orchestrator | 2025-06-02 19:47:31.808922 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 19:47:31.810069 | orchestrator | Monday 02 June 2025 19:47:31 +0000 (0:00:01.628) 0:07:10.563 *********** 2025-06-02 19:47:32.295136 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:32.847131 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:32.847815 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:32.849759 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:32.852595 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:32.853481 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:47:32.854968 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:32.855996 | orchestrator | 2025-06-02 19:47:32.857032 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 19:47:32.858383 | orchestrator | Monday 02 June 2025 19:47:32 +0000 (0:00:01.053) 0:07:11.616 *********** 2025-06-02 19:47:32.985229 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:47:33.055094 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:47:33.120097 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:47:33.184177 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:47:33.260484 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
19:47:33.638969 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:47:33.640007 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:47:33.641572 | orchestrator | 2025-06-02 19:47:33.642804 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-02 19:47:33.643900 | orchestrator | Monday 02 June 2025 19:47:33 +0000 (0:00:00.790) 0:07:12.407 *********** 2025-06-02 19:47:33.775853 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:47:33.837233 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:47:33.906900 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:47:33.968881 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:47:34.032860 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:47:34.142316 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:47:34.142658 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:47:34.143708 | orchestrator | 2025-06-02 19:47:34.144393 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-02 19:47:34.145430 | orchestrator | Monday 02 June 2025 19:47:34 +0000 (0:00:00.504) 0:07:12.911 *********** 2025-06-02 19:47:34.270151 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:34.340893 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:34.407682 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:34.472130 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:34.702407 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:34.798865 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:47:34.799613 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:34.801873 | orchestrator | 2025-06-02 19:47:34.802857 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-02 19:47:34.803953 | orchestrator | Monday 02 June 2025 19:47:34 +0000 (0:00:00.655) 0:07:13.567 *********** 2025-06-02 19:47:34.935036 | orchestrator | ok: 
[testbed-manager] 2025-06-02 19:47:35.009453 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:35.074609 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:35.156609 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:35.222782 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:35.331361 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:47:35.332347 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:35.333469 | orchestrator | 2025-06-02 19:47:35.334259 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-02 19:47:35.334837 | orchestrator | Monday 02 June 2025 19:47:35 +0000 (0:00:00.532) 0:07:14.099 *********** 2025-06-02 19:47:35.468100 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:35.527905 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:35.598454 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:35.663374 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:35.726860 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:35.825287 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:47:35.825709 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:35.826485 | orchestrator | 2025-06-02 19:47:35.827483 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-02 19:47:35.828395 | orchestrator | Monday 02 June 2025 19:47:35 +0000 (0:00:00.495) 0:07:14.595 *********** 2025-06-02 19:47:41.571328 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:41.572238 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:41.572855 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:41.573808 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:41.574989 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:41.575840 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:41.577120 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:47:41.577834 | orchestrator | 2025-06-02 19:47:41.578118 | orchestrator | TASK 
[osism.services.chrony : Manage timesyncd service] ************************ 2025-06-02 19:47:41.578935 | orchestrator | Monday 02 June 2025 19:47:41 +0000 (0:00:05.741) 0:07:20.337 *********** 2025-06-02 19:47:41.775244 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:47:41.873605 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:47:41.945715 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:47:42.010004 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:47:42.126009 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:47:42.127354 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:47:42.127958 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:47:42.128785 | orchestrator | 2025-06-02 19:47:42.130278 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-02 19:47:42.131260 | orchestrator | Monday 02 June 2025 19:47:42 +0000 (0:00:00.556) 0:07:20.894 *********** 2025-06-02 19:47:43.179483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:47:43.181037 | orchestrator | 2025-06-02 19:47:43.182261 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-02 19:47:43.184500 | orchestrator | Monday 02 June 2025 19:47:43 +0000 (0:00:01.052) 0:07:21.946 *********** 2025-06-02 19:47:45.020608 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:45.022409 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:47:45.024834 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:47:45.025163 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:47:45.025639 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:47:45.026337 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:47:45.026867 | orchestrator | ok: [testbed-node-4] 
2025-06-02 19:47:45.027611 | orchestrator |
2025-06-02 19:47:45.028091 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-06-02 19:47:45.028648 | orchestrator | Monday 02 June 2025 19:47:45 +0000 (0:00:01.840) 0:07:23.787 ***********
2025-06-02 19:47:46.140231 | orchestrator | ok: [testbed-manager]
2025-06-02 19:47:46.140360 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:47:46.140442 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:47:46.141152 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:47:46.141177 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:47:46.141411 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:47:46.141880 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:47:46.144035 | orchestrator |
2025-06-02 19:47:46.145077 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-06-02 19:47:46.145552 | orchestrator | Monday 02 June 2025 19:47:46 +0000 (0:00:01.122) 0:07:24.909 ***********
2025-06-02 19:47:46.734164 | orchestrator | ok: [testbed-manager]
2025-06-02 19:47:47.146826 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:47:47.147336 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:47:47.148044 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:47:47.149157 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:47:47.151331 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:47:47.152573 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:47:47.153768 | orchestrator |
2025-06-02 19:47:47.155209 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-06-02 19:47:47.155398 | orchestrator | Monday 02 June 2025 19:47:47 +0000 (0:00:01.004) 0:07:25.914 ***********
2025-06-02 19:47:48.777502 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:47:48.778295 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:47:48.778704 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:47:48.779946 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:47:48.780478 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:47:48.781171 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:47:48.781620 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:47:48.781987 | orchestrator |
2025-06-02 19:47:48.782545 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-06-02 19:47:48.782838 | orchestrator | Monday 02 June 2025 19:47:48 +0000 (0:00:01.630) 0:07:27.545 ***********
2025-06-02 19:47:49.559948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:47:49.560045 | orchestrator |
2025-06-02 19:47:49.560126 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-06-02 19:47:49.560641 | orchestrator | Monday 02 June 2025 19:47:49 +0000 (0:00:00.780) 0:07:28.326 ***********
2025-06-02 19:47:58.403984 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:47:58.404098 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:47:58.405026 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:47:58.405562 | orchestrator | changed: [testbed-manager]
2025-06-02 19:47:58.406912 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:47:58.407468 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:47:58.408139 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:47:58.409302 | orchestrator |
2025-06-02 19:47:58.414192 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-06-02 19:47:58.414234 | orchestrator | Monday 02 June 2025 19:47:58 +0000 (0:00:08.843) 0:07:37.169 ***********
2025-06-02 19:48:00.139830 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:00.139943 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:00.139958 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:00.139970 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:00.140042 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:00.140246 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:00.140533 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:00.141891 | orchestrator |
2025-06-02 19:48:00.141920 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-06-02 19:48:00.141934 | orchestrator | Monday 02 June 2025 19:48:00 +0000 (0:00:01.731) 0:07:38.901 ***********
2025-06-02 19:48:01.405684 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:01.406695 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:01.407230 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:01.408124 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:01.408827 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:01.409484 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:01.410172 | orchestrator |
2025-06-02 19:48:01.410858 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-06-02 19:48:01.411337 | orchestrator | Monday 02 June 2025 19:48:01 +0000 (0:00:01.270) 0:07:40.171 ***********
2025-06-02 19:48:02.853383 | orchestrator | changed: [testbed-manager]
2025-06-02 19:48:02.853691 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:48:02.854346 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:48:02.856044 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:48:02.856257 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:48:02.857265 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:48:02.858180 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:48:02.860118 | orchestrator |
2025-06-02 19:48:02.860713 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-06-02 19:48:02.861653 | orchestrator |
2025-06-02 19:48:02.862569 | orchestrator | TASK [Include hardening role] **************************************************
2025-06-02 19:48:02.863174 | orchestrator | Monday 02 June 2025 19:48:02 +0000 (0:00:01.450) 0:07:41.622 ***********
2025-06-02 19:48:02.978863 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:48:03.049410 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:48:03.107802 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:48:03.172221 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:48:03.232115 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:48:03.343044 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:48:03.343289 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:48:03.344758 | orchestrator |
2025-06-02 19:48:03.345297 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-06-02 19:48:03.349154 | orchestrator |
2025-06-02 19:48:03.350190 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-06-02 19:48:03.350566 | orchestrator | Monday 02 June 2025 19:48:03 +0000 (0:00:00.489) 0:07:42.111 ***********
2025-06-02 19:48:04.641050 | orchestrator | changed: [testbed-manager]
2025-06-02 19:48:04.641574 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:48:04.643437 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:48:04.643483 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:48:04.644178 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:48:04.644769 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:48:04.646404 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:48:04.647246 | orchestrator |
2025-06-02 19:48:04.647751 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-06-02 19:48:04.648339 | orchestrator | Monday 02 June 2025 19:48:04 +0000 (0:00:01.299) 0:07:43.410 ***********
2025-06-02 19:48:05.949716 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:05.949973 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:05.953571 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:05.953625 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:05.953637 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:05.953648 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:05.953658 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:05.954612 | orchestrator |
2025-06-02 19:48:05.955362 | orchestrator | TASK [Include auditd role] *****************************************************
2025-06-02 19:48:05.955572 | orchestrator | Monday 02 June 2025 19:48:05 +0000 (0:00:01.307) 0:07:44.718 ***********
2025-06-02 19:48:06.175815 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:48:06.228797 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:48:06.284767 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:48:06.338451 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:48:06.392055 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:48:06.730862 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:48:06.731055 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:48:06.735425 | orchestrator |
2025-06-02 19:48:06.735494 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-06-02 19:48:06.735557 | orchestrator | Monday 02 June 2025 19:48:06 +0000 (0:00:00.783) 0:07:45.501 ***********
2025-06-02 19:48:07.885299 | orchestrator | changed: [testbed-manager]
2025-06-02 19:48:07.885990 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:48:07.886741 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:48:07.888041 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:48:07.889193 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:48:07.890206 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:48:07.890891 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:48:07.891484 | orchestrator |
2025-06-02 19:48:07.892317 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-06-02 19:48:07.892834 | orchestrator |
2025-06-02 19:48:07.893571 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-06-02 19:48:07.894067 | orchestrator | Monday 02 June 2025 19:48:07 +0000 (0:00:01.149) 0:07:46.650 ***********
2025-06-02 19:48:08.709493 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:48:08.709743 | orchestrator |
2025-06-02 19:48:08.710700 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-06-02 19:48:08.711279 | orchestrator | Monday 02 June 2025 19:48:08 +0000 (0:00:00.824) 0:07:47.475 ***********
2025-06-02 19:48:09.477282 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:09.477427 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:09.478816 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:09.479933 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:09.480747 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:09.481421 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:09.482161 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:09.483278 | orchestrator |
2025-06-02 19:48:09.483737 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-06-02 19:48:09.484177 | orchestrator | Monday 02 June 2025 19:48:09 +0000 (0:00:00.766) 0:07:48.241 ***********
2025-06-02 19:48:10.560669 | orchestrator | changed: [testbed-manager]
2025-06-02 19:48:10.562292 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:48:10.566913 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:48:10.568767 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:48:10.568984 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:48:10.570495 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:48:10.571622 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:48:10.572942 | orchestrator |
2025-06-02 19:48:10.574003 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-06-02 19:48:10.575014 | orchestrator | Monday 02 June 2025 19:48:10 +0000 (0:00:01.084) 0:07:49.326 ***********
2025-06-02 19:48:11.515828 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:48:11.516564 | orchestrator |
2025-06-02 19:48:11.517720 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-06-02 19:48:11.518602 | orchestrator | Monday 02 June 2025 19:48:11 +0000 (0:00:00.953) 0:07:50.280 ***********
2025-06-02 19:48:11.915474 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:12.352961 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:12.354396 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:12.355395 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:12.355956 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:12.356932 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:12.357818 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:12.357985 | orchestrator |
2025-06-02 19:48:12.358707 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-06-02 19:48:12.358978 | orchestrator | Monday 02 June 2025 19:48:12 +0000 (0:00:00.836) 0:07:51.116 ***********
2025-06-02 19:48:13.452937 | orchestrator | changed: [testbed-manager]
2025-06-02 19:48:13.453117 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:48:13.453770 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:48:13.454693 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:48:13.455244 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:48:13.456465 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:48:13.456699 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:48:13.457367 | orchestrator |
2025-06-02 19:48:13.458978 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:48:13.459066 | orchestrator | 2025-06-02 19:48:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:48:13.459082 | orchestrator | 2025-06-02 19:48:13 | INFO  | Please wait and do not abort execution.
2025-06-02 19:48:13.459406 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-06-02 19:48:13.460126 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-06-02 19:48:13.461105 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:48:13.461675 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:48:13.461895 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:48:13.463104 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:48:13.463764 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:48:13.464413 | orchestrator |
2025-06-02 19:48:13.464987 | orchestrator |
2025-06-02 19:48:13.465550 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:48:13.466753 | orchestrator | Monday 02 June 2025 19:48:13 +0000 (0:00:01.101) 0:07:52.218 ***********
2025-06-02 19:48:13.467038 | orchestrator | ===============================================================================
2025-06-02 19:48:13.467641 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.44s
2025-06-02 19:48:13.468164 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.31s
2025-06-02 19:48:13.468752 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.79s
2025-06-02 19:48:13.469291 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.27s
2025-06-02 19:48:13.469943 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.64s
2025-06-02 19:48:13.470669 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.08s
2025-06-02 19:48:13.471338 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.94s
2025-06-02 19:48:13.471927 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.83s
2025-06-02 19:48:13.472552 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.02s
2025-06-02 19:48:13.472892 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.84s
2025-06-02 19:48:13.473582 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.46s
2025-06-02 19:48:13.474002 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.16s
2025-06-02 19:48:13.474468 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.11s
2025-06-02 19:48:13.474841 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.95s
2025-06-02 19:48:13.475392 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.70s
2025-06-02 19:48:13.475895 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.62s
2025-06-02 19:48:13.477250 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.48s
2025-06-02 19:48:13.477927 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.95s
2025-06-02 19:48:13.478682 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.82s
2025-06-02 19:48:13.479022 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.75s
2025-06-02 19:48:14.132754 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-02 19:48:14.132835 | orchestrator | + osism apply network
2025-06-02 19:48:16.211163 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:48:16.211265 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:48:16.211280 | orchestrator | Registering Redlock._release_script
2025-06-02 19:48:16.273106 | orchestrator | 2025-06-02 19:48:16 | INFO  | Task 2da41efb-1daf-4af0-88dd-fb10a6a64557 (network) was prepared for execution.
2025-06-02 19:48:16.273193 | orchestrator | 2025-06-02 19:48:16 | INFO  | It takes a moment until task 2da41efb-1daf-4af0-88dd-fb10a6a64557 (network) has been started and output is visible here.
2025-06-02 19:48:20.383690 | orchestrator |
2025-06-02 19:48:20.384357 | orchestrator | PLAY [Apply role network] ******************************************************
2025-06-02 19:48:20.384591 | orchestrator |
2025-06-02 19:48:20.388300 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-06-02 19:48:20.388663 | orchestrator | Monday 02 June 2025 19:48:20 +0000 (0:00:00.263) 0:00:00.263 ***********
2025-06-02 19:48:20.534463 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:20.609673 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:20.684961 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:20.759102 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:20.936842 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:21.064607 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:21.065649 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:21.066362 | orchestrator |
2025-06-02 19:48:21.067490 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-06-02 19:48:21.068596 | orchestrator | Monday 02 June 2025 19:48:21 +0000 (0:00:00.680) 0:00:00.944 ***********
2025-06-02 19:48:22.239723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:48:22.239870 | orchestrator |
2025-06-02 19:48:22.240119 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-06-02 19:48:22.240725 | orchestrator | Monday 02 June 2025 19:48:22 +0000 (0:00:01.175) 0:00:02.119 ***********
2025-06-02 19:48:24.125418 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:24.125569 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:24.127371 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:24.127389 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:24.130478 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:24.131123 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:24.132720 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:24.133040 | orchestrator |
2025-06-02 19:48:24.134410 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-06-02 19:48:24.134735 | orchestrator | Monday 02 June 2025 19:48:24 +0000 (0:00:01.886) 0:00:04.005 ***********
2025-06-02 19:48:25.810974 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:25.811157 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:25.811917 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:25.813096 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:25.813680 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:25.814355 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:25.815007 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:25.815559 | orchestrator |
2025-06-02 19:48:25.816303 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-06-02 19:48:25.816853 | orchestrator | Monday 02 June 2025 19:48:25 +0000 (0:00:01.683) 0:00:05.689 ***********
2025-06-02 19:48:26.351977 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-06-02 19:48:26.355573 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-06-02 19:48:26.838921 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-06-02 19:48:26.840201 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-06-02 19:48:26.841131 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-06-02 19:48:26.841804 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-06-02 19:48:26.845453 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-06-02 19:48:26.845538 | orchestrator |
2025-06-02 19:48:26.845551 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-06-02 19:48:26.845562 | orchestrator | Monday 02 June 2025 19:48:26 +0000 (0:00:01.032) 0:00:06.721 ***********
2025-06-02 19:48:30.486834 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 19:48:30.487349 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 19:48:30.487381 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 19:48:30.487689 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 19:48:30.489356 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 19:48:30.489989 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 19:48:30.490616 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 19:48:30.491130 | orchestrator |
2025-06-02 19:48:30.491653 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-02 19:48:30.492288 | orchestrator | Monday 02 June 2025 19:48:30 +0000 (0:00:03.639) 0:00:10.361 ***********
2025-06-02 19:48:31.922826 | orchestrator | changed: [testbed-manager]
2025-06-02 19:48:31.925865 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:48:31.926935 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:48:31.927864 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:48:31.928370 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:48:31.931665 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:48:31.932099 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:48:31.932862 | orchestrator |
2025-06-02 19:48:31.933459 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-02 19:48:31.934264 | orchestrator | Monday 02 June 2025 19:48:31 +0000 (0:00:01.440) 0:00:11.801 ***********
2025-06-02 19:48:33.822170 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 19:48:33.823219 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 19:48:33.825010 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 19:48:33.826070 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 19:48:33.826897 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 19:48:33.828075 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 19:48:33.828897 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 19:48:33.829836 | orchestrator |
2025-06-02 19:48:33.830691 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-06-02 19:48:33.831382 | orchestrator | Monday 02 June 2025 19:48:33 +0000 (0:00:01.899) 0:00:13.701 ***********
2025-06-02 19:48:34.264605 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:34.340735 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:34.879857 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:34.880557 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:34.882230 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:34.882672 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:34.884695 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:34.885180 | orchestrator |
2025-06-02 19:48:34.885805 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-06-02 19:48:34.886596 | orchestrator | Monday 02 June 2025 19:48:34 +0000 (0:00:01.059) 0:00:14.760 ***********
2025-06-02 19:48:35.066773 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:48:35.161060 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:48:35.241881 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:48:35.322970 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:48:35.398351 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:48:35.557762 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:48:35.558656 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:48:35.559776 | orchestrator |
2025-06-02 19:48:35.560783 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-06-02 19:48:35.561704 | orchestrator | Monday 02 June 2025 19:48:35 +0000 (0:00:00.680) 0:00:15.440 ***********
2025-06-02 19:48:37.679803 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:37.680409 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:37.681931 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:37.682083 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:37.684203 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:37.684960 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:37.685960 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:37.686752 | orchestrator |
2025-06-02 19:48:37.687708 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-06-02 19:48:37.688345 | orchestrator | Monday 02 June 2025 19:48:37 +0000 (0:00:02.116) 0:00:17.557 ***********
2025-06-02 19:48:37.932853 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:48:38.014557 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:48:38.100936 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:48:38.183228 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:48:38.578220 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:48:38.578700 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:48:38.579564 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-06-02 19:48:38.580260 | orchestrator |
2025-06-02 19:48:38.581048 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-06-02 19:48:38.581781 | orchestrator | Monday 02 June 2025 19:48:38 +0000 (0:00:00.897) 0:00:18.455 ***********
2025-06-02 19:48:40.266302 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:40.266478 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:48:40.267155 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:48:40.267243 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:48:40.269594 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:48:40.270108 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:48:40.271107 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:48:40.271667 | orchestrator |
2025-06-02 19:48:40.272300 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-06-02 19:48:40.272781 | orchestrator | Monday 02 June 2025 19:48:40 +0000 (0:00:01.688) 0:00:20.144 ***********
2025-06-02 19:48:41.480323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:48:41.480829 | orchestrator |
2025-06-02 19:48:41.481621 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-02 19:48:41.482837 | orchestrator | Monday 02 June 2025 19:48:41 +0000 (0:00:01.214) 0:00:21.358 ***********
2025-06-02 19:48:42.464890 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:42.465068 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:42.466392 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:42.467311 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:42.467736 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:42.468767 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:42.469436 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:42.470125 | orchestrator |
2025-06-02 19:48:42.470952 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-06-02 19:48:42.471661 | orchestrator | Monday 02 June 2025 19:48:42 +0000 (0:00:00.987) 0:00:22.346 ***********
2025-06-02 19:48:42.801536 | orchestrator | ok: [testbed-manager]
2025-06-02 19:48:42.884843 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:42.992771 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:43.077370 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:43.175767 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:43.317452 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:43.317633 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:43.317726 | orchestrator |
2025-06-02 19:48:43.318489 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-02 19:48:43.319239 | orchestrator | Monday 02 June 2025 19:48:43 +0000 (0:00:00.853) 0:00:23.200 ***********
2025-06-02 19:48:43.659661 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 19:48:43.660062 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 19:48:43.835466 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 19:48:43.835737 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 19:48:44.468007 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 19:48:44.468253 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 19:48:44.469330 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 19:48:44.469656 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 19:48:44.471021 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 19:48:44.475938 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 19:48:44.476113 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 19:48:44.476144 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 19:48:44.476416 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 19:48:44.477372 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 19:48:44.477978 | orchestrator |
2025-06-02 19:48:44.478447 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-06-02 19:48:44.478923 | orchestrator | Monday 02 June 2025 19:48:44 +0000 (0:00:01.147) 0:00:24.347 ***********
2025-06-02 19:48:44.639863 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:48:44.721898 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:48:44.803754 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:48:44.888522 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:48:44.973477 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:48:45.088242 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:48:45.088882 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:48:45.089820 | orchestrator |
2025-06-02 19:48:45.090432 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-06-02 19:48:45.095442 | orchestrator | Monday 02 June 2025 19:48:45 +0000 (0:00:00.622) 0:00:24.970 ***********
2025-06-02 19:48:48.762956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-3, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-5
2025-06-02 19:48:48.763298 | orchestrator |
2025-06-02 19:48:48.766690 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-06-02 19:48:48.766721 | orchestrator | Monday 02 June 2025 19:48:48 +0000 (0:00:03.671) 0:00:28.641 ***********
2025-06-02 19:48:53.513903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-02 19:48:53.518200 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-02 19:48:53.520514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-02 19:48:53.521824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-02 19:48:53.522448 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-02 19:48:53.523026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses':
['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:53.523893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:53.524626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:53.525089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:53.525696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:53.526111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:53.526536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 
2025-06-02 19:48:53.526956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:53.527376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:53.527813 | orchestrator | 2025-06-02 19:48:53.528249 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-02 19:48:53.529737 | orchestrator | Monday 02 June 2025 19:48:53 +0000 (0:00:04.749) 0:00:33.391 *********** 2025-06-02 19:48:58.207637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:58.210738 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:58.210778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:58.211701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:58.213416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:58.213898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:58.215064 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:58.215572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:58.216481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:48:58.217050 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:58.217870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:58.218741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:58.219331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:58.220482 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:48:58.220663 | orchestrator | 2025-06-02 19:48:58.220999 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-02 19:48:58.221667 | orchestrator | Monday 02 June 2025 19:48:58 +0000 (0:00:04.697) 0:00:38.088 *********** 2025-06-02 19:48:59.470263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:48:59.470531 | orchestrator | 2025-06-02 19:48:59.471013 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-02 19:48:59.471206 | orchestrator | Monday 02 June 2025 19:48:59 +0000 (0:00:01.260) 0:00:39.349 *********** 2025-06-02 
19:48:59.927944 | orchestrator | ok: [testbed-manager] 2025-06-02 19:49:00.197387 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:49:00.608573 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:49:00.609283 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:49:00.609865 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:49:00.611604 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:49:00.611720 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:49:00.612405 | orchestrator | 2025-06-02 19:49:00.613671 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-02 19:49:00.615580 | orchestrator | Monday 02 June 2025 19:49:00 +0000 (0:00:01.141) 0:00:40.490 *********** 2025-06-02 19:49:00.704839 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:49:00.705554 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:49:00.709594 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:49:00.709636 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:49:00.812905 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:49:00.814136 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:49:00.814809 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:49:00.820423 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:49:00.820454 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:49:00.910334 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:49:00.910613 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:49:00.911576 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:49:00.914420 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:49:00.914641 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:49:01.052322 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:49:01.053630 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:49:01.054997 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:49:01.056284 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:49:01.056949 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:49:01.212275 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:49:01.213017 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:49:01.214619 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:49:01.216041 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:49:01.217129 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:49:01.353769 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:49:01.355118 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:49:01.358968 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:49:01.360686 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:49:01.362133 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  
2025-06-02 19:49:02.834253 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:49:02.835645 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:49:02.837342 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:49:02.838994 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:49:02.840705 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:49:02.841967 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:49:02.843018 | orchestrator | 2025-06-02 19:49:02.844516 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-02 19:49:02.845556 | orchestrator | Monday 02 June 2025 19:49:02 +0000 (0:00:02.222) 0:00:42.713 *********** 2025-06-02 19:49:02.998989 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:49:03.093115 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:49:03.177470 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:49:03.265519 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:49:03.346434 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:49:03.506404 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:49:03.507693 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:49:03.509033 | orchestrator | 2025-06-02 19:49:03.511317 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-06-02 19:49:03.511847 | orchestrator | Monday 02 June 2025 19:49:03 +0000 (0:00:00.675) 0:00:43.388 *********** 2025-06-02 19:49:03.672316 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:49:03.749906 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:49:04.007465 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:49:04.100652 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
19:49:04.179318 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:49:04.220324 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:49:04.220458 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:49:04.221673 | orchestrator | 2025-06-02 19:49:04.223639 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:49:04.223714 | orchestrator | 2025-06-02 19:49:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:49:04.223728 | orchestrator | 2025-06-02 19:49:04 | INFO  | Please wait and do not abort execution. 2025-06-02 19:49:04.224651 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 19:49:04.225623 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:49:04.226351 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:49:04.226943 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:49:04.227585 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:49:04.228381 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:49:04.229055 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:49:04.229466 | orchestrator | 2025-06-02 19:49:04.229980 | orchestrator | 2025-06-02 19:49:04.230308 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:49:04.230733 | orchestrator | Monday 02 June 2025 19:49:04 +0000 (0:00:00.714) 0:00:44.103 *********** 2025-06-02 19:49:04.231349 | orchestrator | 
=============================================================================== 2025-06-02 19:49:04.231675 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.75s 2025-06-02 19:49:04.232246 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.70s 2025-06-02 19:49:04.233010 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.67s 2025-06-02 19:49:04.233738 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.64s 2025-06-02 19:49:04.234082 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.22s 2025-06-02 19:49:04.234786 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s 2025-06-02 19:49:04.235063 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.90s 2025-06-02 19:49:04.236097 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.89s 2025-06-02 19:49:04.237232 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s 2025-06-02 19:49:04.237700 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.68s 2025-06-02 19:49:04.238615 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.44s 2025-06-02 19:49:04.239608 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.26s 2025-06-02 19:49:04.240140 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.21s 2025-06-02 19:49:04.240738 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s 2025-06-02 19:49:04.241272 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.15s 2025-06-02 19:49:04.241953 | orchestrator | 
osism.commons.network : List existing configuration files --------------- 1.14s 2025-06-02 19:49:04.242523 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.06s 2025-06-02 19:49:04.242973 | orchestrator | osism.commons.network : Create required directories --------------------- 1.03s 2025-06-02 19:49:04.243797 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s 2025-06-02 19:49:04.244295 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.90s 2025-06-02 19:49:04.855889 | orchestrator | + osism apply wireguard 2025-06-02 19:49:06.565819 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:49:06.565919 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:49:06.565934 | orchestrator | Registering Redlock._release_script 2025-06-02 19:49:06.623772 | orchestrator | 2025-06-02 19:49:06 | INFO  | Task cd9d5127-0eec-4f1f-a3a9-39e759b2caa6 (wireguard) was prepared for execution. 2025-06-02 19:49:06.623858 | orchestrator | 2025-06-02 19:49:06 | INFO  | It takes a moment until task cd9d5127-0eec-4f1f-a3a9-39e759b2caa6 (wireguard) has been started and output is visible here. 
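For orientation, the "Create systemd networkd netdev files" task in the network play above writes one .netdev unit per VXLAN item (the cleanup task later lists them as /etc/systemd/network/30-vxlan0.netdev and 30-vxlan1.netdev). A minimal sketch of what such a unit could contain for the vxlan0 item on testbed-manager, using only the values visible in the task output (vni 42, local_ip 192.168.16.5, mtu 1350) — the exact keys the role emits are an assumption, not copied from the generated file:

```ini
# /etc/systemd/network/30-vxlan0.netdev (illustrative sketch, not the file from this job)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
# The per-peer 'dests' list from the item is typically realized as static
# FDB entries (e.g. via 'bridge fdb append' in a dispatcher script), since
# a [VXLAN] section's Remote= only covers the single-remote case.
```

The matching .network file (created by the next task) would then assign the addresses from the item, e.g. 192.168.112.5/20 on the manager.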
2025-06-02 19:49:10.290213 | orchestrator | 2025-06-02 19:49:10.290780 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-02 19:49:10.291331 | orchestrator | 2025-06-02 19:49:10.292034 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-02 19:49:10.293538 | orchestrator | Monday 02 June 2025 19:49:10 +0000 (0:00:00.214) 0:00:00.214 *********** 2025-06-02 19:49:11.445233 | orchestrator | ok: [testbed-manager] 2025-06-02 19:49:11.445334 | orchestrator | 2025-06-02 19:49:11.445349 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-02 19:49:11.445437 | orchestrator | Monday 02 June 2025 19:49:11 +0000 (0:00:01.155) 0:00:01.370 *********** 2025-06-02 19:49:17.184465 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:17.184623 | orchestrator | 2025-06-02 19:49:17.185029 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-02 19:49:17.185453 | orchestrator | Monday 02 June 2025 19:49:17 +0000 (0:00:05.738) 0:00:07.109 *********** 2025-06-02 19:49:17.732630 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:17.732945 | orchestrator | 2025-06-02 19:49:17.733843 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-02 19:49:17.735955 | orchestrator | Monday 02 June 2025 19:49:17 +0000 (0:00:00.549) 0:00:07.658 *********** 2025-06-02 19:49:18.156473 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:18.157746 | orchestrator | 2025-06-02 19:49:18.157856 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-02 19:49:18.158798 | orchestrator | Monday 02 June 2025 19:49:18 +0000 (0:00:00.422) 0:00:08.081 *********** 2025-06-02 19:49:18.674316 | orchestrator | ok: [testbed-manager] 2025-06-02 19:49:18.674554 | orchestrator | 2025-06-02 
19:49:18.675055 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-02 19:49:18.675723 | orchestrator | Monday 02 June 2025 19:49:18 +0000 (0:00:00.517) 0:00:08.598 *********** 2025-06-02 19:49:19.189880 | orchestrator | ok: [testbed-manager] 2025-06-02 19:49:19.190530 | orchestrator | 2025-06-02 19:49:19.190838 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-02 19:49:19.191287 | orchestrator | Monday 02 June 2025 19:49:19 +0000 (0:00:00.515) 0:00:09.114 *********** 2025-06-02 19:49:19.574987 | orchestrator | ok: [testbed-manager] 2025-06-02 19:49:19.575952 | orchestrator | 2025-06-02 19:49:19.577779 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-02 19:49:19.578454 | orchestrator | Monday 02 June 2025 19:49:19 +0000 (0:00:00.386) 0:00:09.500 *********** 2025-06-02 19:49:20.819003 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:20.819210 | orchestrator | 2025-06-02 19:49:20.819835 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-02 19:49:20.819879 | orchestrator | Monday 02 June 2025 19:49:20 +0000 (0:00:01.243) 0:00:10.744 *********** 2025-06-02 19:49:21.737556 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 19:49:21.738872 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:21.739280 | orchestrator | 2025-06-02 19:49:21.740382 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-02 19:49:21.741049 | orchestrator | Monday 02 June 2025 19:49:21 +0000 (0:00:00.919) 0:00:11.663 *********** 2025-06-02 19:49:23.395088 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:23.395685 | orchestrator | 2025-06-02 19:49:23.396728 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-06-02 
19:49:23.398285 | orchestrator | Monday 02 June 2025 19:49:23 +0000 (0:00:01.654) 0:00:13.318 *********** 2025-06-02 19:49:25.298238 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:25.298475 | orchestrator | 2025-06-02 19:49:25.299898 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:49:25.300120 | orchestrator | 2025-06-02 19:49:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:49:25.300535 | orchestrator | 2025-06-02 19:49:25 | INFO  | Please wait and do not abort execution. 2025-06-02 19:49:25.300999 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:49:25.301607 | orchestrator | 2025-06-02 19:49:25.302200 | orchestrator | 2025-06-02 19:49:25.302642 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:49:25.302989 | orchestrator | Monday 02 June 2025 19:49:25 +0000 (0:00:01.904) 0:00:15.223 *********** 2025-06-02 19:49:25.303555 | orchestrator | =============================================================================== 2025-06-02 19:49:25.303931 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.74s 2025-06-02 19:49:25.304435 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.90s 2025-06-02 19:49:25.304759 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s 2025-06-02 19:49:25.305246 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.24s 2025-06-02 19:49:25.305611 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.16s 2025-06-02 19:49:25.306387 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s 2025-06-02 19:49:25.306870 | orchestrator | 
osism.services.wireguard : Create public and private key - server ------- 0.55s 2025-06-02 19:49:25.307160 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-06-02 19:49:25.307679 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.52s 2025-06-02 19:49:25.308111 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2025-06-02 19:49:25.308592 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s 2025-06-02 19:49:25.831026 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-06-02 19:49:25.870731 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-06-02 19:49:25.870821 | orchestrator | Dload Upload Total Spent Left Speed 2025-06-02 19:49:25.953848 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 168 0 --:--:-- --:--:-- --:--:-- 166 2025-06-02 19:49:25.967234 | orchestrator | + osism apply --environment custom workarounds 2025-06-02 19:49:27.634455 | orchestrator | 2025-06-02 19:49:27 | INFO  | Trying to run play workarounds in environment custom 2025-06-02 19:49:27.639407 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:49:27.639450 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:49:27.639463 | orchestrator | Registering Redlock._release_script 2025-06-02 19:49:27.709842 | orchestrator | 2025-06-02 19:49:27 | INFO  | Task e5ea2a6d-3558-48dd-9804-e40a945077ef (workarounds) was prepared for execution. 2025-06-02 19:49:27.709912 | orchestrator | 2025-06-02 19:49:27 | INFO  | It takes a moment until task e5ea2a6d-3558-48dd-9804-e40a945077ef (workarounds) has been started and output is visible here. 
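The wireguard play above installs the package, generates the server keypair and preshared key, and renders /etc/wireguard/wg0.conf before starting wg-quick@wg0. As a rough sketch of the shape of such a server config (addresses, port, and key placeholders are illustrative assumptions — the real values come from the role's templates and the keys generated in this run):

```ini
# /etc/wireguard/wg0.conf (illustrative sketch; real keys and addresses differ)
[Interface]
Address = 192.168.0.1/24          # placeholder VPN-internal address
ListenPort = 51820                # common WireGuard default port
PrivateKey = <server-private-key> # from "Create public and private key - server"

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>    # from "Create preshared key"
AllowedIPs = 192.168.0.2/32       # placeholder client address
```

The "Copy client configuration files" task renders the mirror-image config for each client, which the subsequent prepare-wireguard-configuration.sh step then picks up.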
2025-06-02 19:49:31.576276 | orchestrator | 2025-06-02 19:49:31.577154 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 19:49:31.582426 | orchestrator | 2025-06-02 19:49:31.582933 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-02 19:49:31.584426 | orchestrator | Monday 02 June 2025 19:49:31 +0000 (0:00:00.166) 0:00:00.166 *********** 2025-06-02 19:49:31.742843 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-02 19:49:31.823908 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-02 19:49:31.909558 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-02 19:49:31.989667 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-02 19:49:32.191594 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-02 19:49:32.332522 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-02 19:49:32.336920 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-02 19:49:32.336956 | orchestrator | 2025-06-02 19:49:32.336969 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-02 19:49:32.336981 | orchestrator | 2025-06-02 19:49:32.337132 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-02 19:49:32.337660 | orchestrator | Monday 02 June 2025 19:49:32 +0000 (0:00:00.756) 0:00:00.923 *********** 2025-06-02 19:49:34.777870 | orchestrator | ok: [testbed-manager] 2025-06-02 19:49:34.778355 | orchestrator | 2025-06-02 19:49:34.779399 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-02 19:49:34.779793 | orchestrator | 2025-06-02 19:49:34.780822 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-06-02 19:49:34.781546 | orchestrator | Monday 02 June 2025 19:49:34 +0000 (0:00:02.441) 0:00:03.365 *********** 2025-06-02 19:49:36.610418 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:49:36.612235 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:49:36.613176 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:49:36.614402 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:49:36.615387 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:49:36.615790 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:49:36.616394 | orchestrator | 2025-06-02 19:49:36.617029 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-02 19:49:36.617649 | orchestrator | 2025-06-02 19:49:36.618162 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-06-02 19:49:36.618809 | orchestrator | Monday 02 June 2025 19:49:36 +0000 (0:00:01.832) 0:00:05.197 *********** 2025-06-02 19:49:38.101354 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 19:49:38.101635 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 19:49:38.102425 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 19:49:38.103442 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 19:49:38.106006 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 19:49:38.106086 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 19:49:38.106100 | orchestrator | 2025-06-02 19:49:38.106113 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-06-02 19:49:38.106670 | orchestrator | Monday 02 June 2025 19:49:38 +0000 (0:00:01.492) 0:00:06.690 *********** 2025-06-02 19:49:41.858942 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:49:41.860064 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:49:41.860219 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:49:41.861986 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:49:41.862620 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:49:41.863445 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:49:41.863966 | orchestrator | 2025-06-02 19:49:41.864580 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-02 19:49:41.864992 | orchestrator | Monday 02 June 2025 19:49:41 +0000 (0:00:03.759) 0:00:10.449 *********** 2025-06-02 19:49:41.992106 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:49:42.061170 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:49:42.128866 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:49:42.196926 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:49:42.442907 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:49:42.443073 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:49:42.446879 | orchestrator | 2025-06-02 19:49:42.447094 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-02 19:49:42.447802 | orchestrator | 2025-06-02 19:49:42.449735 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-02 19:49:42.450466 | orchestrator | Monday 02 June 2025 19:49:42 +0000 (0:00:00.585) 0:00:11.035 *********** 2025-06-02 19:49:44.050722 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:44.050939 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:49:44.052708 | orchestrator | changed: [testbed-node-4] 2025-06-02 
19:49:44.053354 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:49:44.054792 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:49:44.056679 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:49:44.057054 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:49:44.058206 | orchestrator | 2025-06-02 19:49:44.059330 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-06-02 19:49:44.059860 | orchestrator | Monday 02 June 2025 19:49:44 +0000 (0:00:01.608) 0:00:12.644 *********** 2025-06-02 19:49:45.502902 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:45.502998 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:49:45.504629 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:49:45.505661 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:49:45.506582 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:49:45.507411 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:49:45.508055 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:49:45.508803 | orchestrator | 2025-06-02 19:49:45.509454 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-02 19:49:45.510106 | orchestrator | Monday 02 June 2025 19:49:45 +0000 (0:00:01.447) 0:00:14.092 *********** 2025-06-02 19:49:46.866386 | orchestrator | ok: [testbed-manager] 2025-06-02 19:49:46.866657 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:49:46.868095 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:49:46.868818 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:49:46.869887 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:49:46.870870 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:49:46.871472 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:49:46.872249 | orchestrator | 2025-06-02 19:49:46.872958 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-02 19:49:46.873769 | orchestrator 
| Monday 02 June 2025 19:49:46 +0000 (0:00:01.366) 0:00:15.458 *********** 2025-06-02 19:49:48.604300 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:49:48.604583 | orchestrator | changed: [testbed-manager] 2025-06-02 19:49:48.604988 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:49:48.606506 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:49:48.607416 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:49:48.609288 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:49:48.609736 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:49:48.610325 | orchestrator | 2025-06-02 19:49:48.612705 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-02 19:49:48.612972 | orchestrator | Monday 02 June 2025 19:49:48 +0000 (0:00:01.733) 0:00:17.191 *********** 2025-06-02 19:49:48.762935 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:49:48.843712 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:49:48.920761 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:49:48.993124 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:49:49.084784 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:49:49.211269 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:49:49.211596 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:49:49.212500 | orchestrator | 2025-06-02 19:49:49.214170 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-02 19:49:49.214864 | orchestrator | 2025-06-02 19:49:49.214889 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-02 19:49:49.215368 | orchestrator | Monday 02 June 2025 19:49:49 +0000 (0:00:00.610) 0:00:17.801 *********** 2025-06-02 19:49:52.036066 | orchestrator | ok: [testbed-manager] 2025-06-02 19:49:52.036247 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:49:52.036805 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 19:49:52.037599 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:49:52.041983 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:49:52.042657 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:49:52.043998 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:49:52.044909 | orchestrator | 2025-06-02 19:49:52.046192 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:49:52.046599 | orchestrator | 2025-06-02 19:49:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:49:52.046960 | orchestrator | 2025-06-02 19:49:52 | INFO  | Please wait and do not abort execution. 2025-06-02 19:49:52.047621 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:49:52.048157 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:49:52.048999 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:49:52.050136 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:49:52.050590 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:49:52.050919 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:49:52.051886 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:49:52.053294 | orchestrator | 2025-06-02 19:49:52.054242 | orchestrator | 2025-06-02 19:49:52.055956 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:49:52.056961 | orchestrator | Monday 02 June 2025 19:49:52 +0000 (0:00:02.824) 0:00:20.626 *********** 2025-06-02 19:49:52.057912 
| orchestrator | =============================================================================== 2025-06-02 19:49:52.058873 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.76s 2025-06-02 19:49:52.059431 | orchestrator | Install python3-docker -------------------------------------------------- 2.82s 2025-06-02 19:49:52.060324 | orchestrator | Apply netplan configuration --------------------------------------------- 2.44s 2025-06-02 19:49:52.060954 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s 2025-06-02 19:49:52.062363 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.73s 2025-06-02 19:49:52.062819 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s 2025-06-02 19:49:52.063298 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s 2025-06-02 19:49:52.063767 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.45s 2025-06-02 19:49:52.064532 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.37s 2025-06-02 19:49:52.065440 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s 2025-06-02 19:49:52.065874 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-06-02 19:49:52.066376 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.59s 2025-06-02 19:49:52.621046 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-02 19:49:54.286692 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:49:54.286816 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:49:54.286829 | orchestrator | Registering Redlock._release_script 2025-06-02 19:49:54.346701 | orchestrator | 2025-06-02 19:49:54 | INFO  
| Task 1293fa2a-28ed-492d-ba66-5ad230f895ee (reboot) was prepared for execution. 2025-06-02 19:49:54.346816 | orchestrator | 2025-06-02 19:49:54 | INFO  | It takes a moment until task 1293fa2a-28ed-492d-ba66-5ad230f895ee (reboot) has been started and output is visible here. 2025-06-02 19:49:58.243835 | orchestrator | 2025-06-02 19:49:58.245938 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 19:49:58.245971 | orchestrator | 2025-06-02 19:49:58.246011 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 19:49:58.246371 | orchestrator | Monday 02 June 2025 19:49:58 +0000 (0:00:00.209) 0:00:00.209 *********** 2025-06-02 19:49:58.337568 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:49:58.341953 | orchestrator | 2025-06-02 19:49:58.343312 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 19:49:58.345580 | orchestrator | Monday 02 June 2025 19:49:58 +0000 (0:00:00.096) 0:00:00.306 *********** 2025-06-02 19:49:59.249172 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:49:59.249420 | orchestrator | 2025-06-02 19:49:59.250322 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 19:49:59.253105 | orchestrator | Monday 02 June 2025 19:49:59 +0000 (0:00:00.911) 0:00:01.218 *********** 2025-06-02 19:49:59.366070 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:49:59.367029 | orchestrator | 2025-06-02 19:49:59.369421 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 19:49:59.369980 | orchestrator | 2025-06-02 19:49:59.370571 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 19:49:59.370976 | orchestrator | Monday 02 June 2025 19:49:59 +0000 (0:00:00.117) 0:00:01.335 *********** 2025-06-02 19:49:59.477984 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 19:49:59.478712 | orchestrator | 2025-06-02 19:49:59.479442 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 19:49:59.482407 | orchestrator | Monday 02 June 2025 19:49:59 +0000 (0:00:00.111) 0:00:01.447 *********** 2025-06-02 19:50:00.149808 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:50:00.149910 | orchestrator | 2025-06-02 19:50:00.150692 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 19:50:00.150899 | orchestrator | Monday 02 June 2025 19:50:00 +0000 (0:00:00.671) 0:00:02.119 *********** 2025-06-02 19:50:00.262286 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:50:00.263237 | orchestrator | 2025-06-02 19:50:00.263881 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 19:50:00.265875 | orchestrator | 2025-06-02 19:50:00.266627 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 19:50:00.267072 | orchestrator | Monday 02 June 2025 19:50:00 +0000 (0:00:00.112) 0:00:02.231 *********** 2025-06-02 19:50:00.451393 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:50:00.451827 | orchestrator | 2025-06-02 19:50:00.452568 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 19:50:00.453169 | orchestrator | Monday 02 June 2025 19:50:00 +0000 (0:00:00.189) 0:00:02.421 *********** 2025-06-02 19:50:01.096651 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:50:01.097170 | orchestrator | 2025-06-02 19:50:01.098381 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 19:50:01.098889 | orchestrator | Monday 02 June 2025 19:50:01 +0000 (0:00:00.644) 0:00:03.065 *********** 2025-06-02 19:50:01.227214 | orchestrator | skipping: [testbed-node-2] 
2025-06-02 19:50:01.227945 | orchestrator | 2025-06-02 19:50:01.229180 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 19:50:01.229919 | orchestrator | 2025-06-02 19:50:01.231162 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 19:50:01.232033 | orchestrator | Monday 02 June 2025 19:50:01 +0000 (0:00:00.128) 0:00:03.194 *********** 2025-06-02 19:50:01.324821 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:50:01.325360 | orchestrator | 2025-06-02 19:50:01.325966 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 19:50:01.327704 | orchestrator | Monday 02 June 2025 19:50:01 +0000 (0:00:00.099) 0:00:03.293 *********** 2025-06-02 19:50:01.959274 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:50:01.959444 | orchestrator | 2025-06-02 19:50:01.961714 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 19:50:01.962814 | orchestrator | Monday 02 June 2025 19:50:01 +0000 (0:00:00.633) 0:00:03.927 *********** 2025-06-02 19:50:02.074185 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:50:02.074360 | orchestrator | 2025-06-02 19:50:02.075061 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 19:50:02.075997 | orchestrator | 2025-06-02 19:50:02.076340 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 19:50:02.079863 | orchestrator | Monday 02 June 2025 19:50:02 +0000 (0:00:00.113) 0:00:04.041 *********** 2025-06-02 19:50:02.168530 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:50:02.168776 | orchestrator | 2025-06-02 19:50:02.170824 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 19:50:02.173863 | orchestrator | Monday 02 June 2025 
19:50:02 +0000 (0:00:00.096) 0:00:04.137 *********** 2025-06-02 19:50:02.821741 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:50:02.821991 | orchestrator | 2025-06-02 19:50:02.822920 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 19:50:02.823640 | orchestrator | Monday 02 June 2025 19:50:02 +0000 (0:00:00.650) 0:00:04.788 *********** 2025-06-02 19:50:02.929035 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:50:02.929753 | orchestrator | 2025-06-02 19:50:02.930714 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 19:50:02.931715 | orchestrator | 2025-06-02 19:50:02.932468 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 19:50:02.933067 | orchestrator | Monday 02 June 2025 19:50:02 +0000 (0:00:00.107) 0:00:04.896 *********** 2025-06-02 19:50:03.027104 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:50:03.027403 | orchestrator | 2025-06-02 19:50:03.028591 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 19:50:03.030557 | orchestrator | Monday 02 June 2025 19:50:03 +0000 (0:00:00.100) 0:00:04.996 *********** 2025-06-02 19:50:03.687380 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:50:03.687775 | orchestrator | 2025-06-02 19:50:03.689951 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 19:50:03.691902 | orchestrator | Monday 02 June 2025 19:50:03 +0000 (0:00:00.658) 0:00:05.655 *********** 2025-06-02 19:50:03.718616 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:50:03.719659 | orchestrator | 2025-06-02 19:50:03.720433 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:50:03.720806 | orchestrator | 2025-06-02 19:50:03 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-06-02 19:50:03.721114 | orchestrator | 2025-06-02 19:50:03 | INFO  | Please wait and do not abort execution. 2025-06-02 19:50:03.722237 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:50:03.722707 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:50:03.723519 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:50:03.723999 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:50:03.724544 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:50:03.724934 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:50:03.725747 | orchestrator | 2025-06-02 19:50:03.726565 | orchestrator | 2025-06-02 19:50:03.726938 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:50:03.727402 | orchestrator | Monday 02 June 2025 19:50:03 +0000 (0:00:00.033) 0:00:05.688 *********** 2025-06-02 19:50:03.727798 | orchestrator | =============================================================================== 2025-06-02 19:50:03.728384 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.17s 2025-06-02 19:50:03.729196 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.69s 2025-06-02 19:50:03.729821 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.61s 2025-06-02 19:50:04.269721 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-02 19:50:05.915273 | orchestrator | Registering Redlock._acquired_script 2025-06-02 
19:50:05.915408 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:50:05.915422 | orchestrator | Registering Redlock._release_script 2025-06-02 19:50:05.987093 | orchestrator | 2025-06-02 19:50:05 | INFO  | Task 8c22a045-c2c9-4bfc-a751-98ab1feb43a0 (wait-for-connection) was prepared for execution. 2025-06-02 19:50:05.987189 | orchestrator | 2025-06-02 19:50:05 | INFO  | It takes a moment until task 8c22a045-c2c9-4bfc-a751-98ab1feb43a0 (wait-for-connection) has been started and output is visible here. 2025-06-02 19:50:09.926232 | orchestrator | 2025-06-02 19:50:09.926628 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-02 19:50:09.927283 | orchestrator | 2025-06-02 19:50:09.929668 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-02 19:50:09.930518 | orchestrator | Monday 02 June 2025 19:50:09 +0000 (0:00:00.235) 0:00:00.235 *********** 2025-06-02 19:50:22.435787 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:50:22.435905 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:50:22.435950 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:50:22.436349 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:50:22.437322 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:50:22.438870 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:50:22.439303 | orchestrator | 2025-06-02 19:50:22.439867 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:50:22.440277 | orchestrator | 2025-06-02 19:50:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:50:22.440500 | orchestrator | 2025-06-02 19:50:22 | INFO  | Please wait and do not abort execution. 
2025-06-02 19:50:22.441617 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:50:22.442888 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:50:22.443544 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:50:22.443746 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:50:22.444743 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:50:22.445992 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:50:22.446960 | orchestrator | 2025-06-02 19:50:22.447575 | orchestrator | 2025-06-02 19:50:22.448181 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:50:22.449190 | orchestrator | Monday 02 June 2025 19:50:22 +0000 (0:00:12.513) 0:00:12.748 *********** 2025-06-02 19:50:22.449561 | orchestrator | =============================================================================== 2025-06-02 19:50:22.450097 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.51s 2025-06-02 19:50:22.994284 | orchestrator | + osism apply hddtemp 2025-06-02 19:50:24.642412 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:50:24.642548 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:50:24.642564 | orchestrator | Registering Redlock._release_script 2025-06-02 19:50:24.702111 | orchestrator | 2025-06-02 19:50:24 | INFO  | Task 3de9b43a-8670-40b5-a7b1-8188c0eded74 (hddtemp) was prepared for execution. 
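The `osism apply reboot` / `osism apply wait-for-connection` pair above follows a common two-phase pattern: fire the reboot without blocking (the SSH connection drops mid-command), then poll each node until SSH answers again. A rough shell equivalent of the waiting half (the real play uses Ansible's `wait_for_connection` module; the timeout value and SSH options here are assumptions):

```shell
# Poll a host over SSH until it accepts a connection again, or give up
# after `timeout` seconds. Sketch only, not the playbook's actual code.
wait_for_ssh() {
    local host=$1 timeout=${2:-300} waited=0
    until ssh -o ConnectTimeout=5 -o BatchMode=yes "$host" true 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            return 1   # node did not come back in time
        fi
        sleep 5
    done
}
```

The 12.5 s recorded for "Wait until remote system is reachable" above is exactly this kind of loop: the nodes finished rebooting while the poll was running.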
2025-06-02 19:50:24.702228 | orchestrator | 2025-06-02 19:50:24 | INFO  | It takes a moment until task 3de9b43a-8670-40b5-a7b1-8188c0eded74 (hddtemp) has been started and output is visible here. 2025-06-02 19:50:28.755454 | orchestrator | 2025-06-02 19:50:28.762837 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-02 19:50:28.764200 | orchestrator | 2025-06-02 19:50:28.765339 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-02 19:50:28.768539 | orchestrator | Monday 02 June 2025 19:50:28 +0000 (0:00:00.254) 0:00:00.254 *********** 2025-06-02 19:50:28.929274 | orchestrator | ok: [testbed-manager] 2025-06-02 19:50:29.007278 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:50:29.091061 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:50:29.168525 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:50:29.363458 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:50:29.505411 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:50:29.506873 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:50:29.507620 | orchestrator | 2025-06-02 19:50:29.509351 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-02 19:50:29.510572 | orchestrator | Monday 02 June 2025 19:50:29 +0000 (0:00:00.749) 0:00:01.003 *********** 2025-06-02 19:50:30.678376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:50:30.678807 | orchestrator | 2025-06-02 19:50:30.679380 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-02 19:50:30.680397 | orchestrator | Monday 02 June 2025 19:50:30 +0000 (0:00:01.173) 0:00:02.177 *********** 2025-06-02 19:50:32.555718 | 
orchestrator | ok: [testbed-manager] 2025-06-02 19:50:32.556546 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:50:32.557871 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:50:32.558139 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:50:32.559464 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:50:32.559888 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:50:32.560661 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:50:32.561172 | orchestrator | 2025-06-02 19:50:32.561632 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-02 19:50:32.562001 | orchestrator | Monday 02 June 2025 19:50:32 +0000 (0:00:01.880) 0:00:04.057 *********** 2025-06-02 19:50:33.211102 | orchestrator | changed: [testbed-manager] 2025-06-02 19:50:33.299072 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:50:33.732908 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:50:33.733382 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:50:33.733766 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:50:33.734547 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:50:33.734787 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:50:33.735923 | orchestrator | 2025-06-02 19:50:33.736659 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-02 19:50:33.739719 | orchestrator | Monday 02 June 2025 19:50:33 +0000 (0:00:01.173) 0:00:05.230 *********** 2025-06-02 19:50:34.848808 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:50:34.849608 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:50:34.852237 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:50:34.852788 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:50:34.854069 | orchestrator | ok: [testbed-manager] 2025-06-02 19:50:34.854898 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:50:34.856326 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:50:34.857336 | orchestrator | 
2025-06-02 19:50:34.858447 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-02 19:50:34.859546 | orchestrator | Monday 02 June 2025 19:50:34 +0000 (0:00:01.119) 0:00:06.350 *********** 2025-06-02 19:50:35.283768 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:50:35.361615 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:50:35.434946 | orchestrator | changed: [testbed-manager] 2025-06-02 19:50:35.516207 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:50:35.639555 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:50:35.641692 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:50:35.644795 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:50:35.645538 | orchestrator | 2025-06-02 19:50:35.646429 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-02 19:50:35.647390 | orchestrator | Monday 02 June 2025 19:50:35 +0000 (0:00:00.787) 0:00:07.137 *********** 2025-06-02 19:50:47.980984 | orchestrator | changed: [testbed-manager] 2025-06-02 19:50:47.981102 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:50:47.981117 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:50:47.982591 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:50:47.983803 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:50:47.985272 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:50:47.986014 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:50:47.987129 | orchestrator | 2025-06-02 19:50:47.987969 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-02 19:50:47.988888 | orchestrator | Monday 02 June 2025 19:50:47 +0000 (0:00:12.339) 0:00:19.476 *********** 2025-06-02 19:50:49.328995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:50:49.331065 | orchestrator | 2025-06-02 19:50:49.331127 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-02 19:50:49.331919 | orchestrator | Monday 02 June 2025 19:50:49 +0000 (0:00:01.350) 0:00:20.827 *********** 2025-06-02 19:50:51.125184 | orchestrator | changed: [testbed-manager] 2025-06-02 19:50:51.126605 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:50:51.128893 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:50:51.130123 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:50:51.131309 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:50:51.132501 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:50:51.133819 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:50:51.134915 | orchestrator | 2025-06-02 19:50:51.135956 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:50:51.136434 | orchestrator | 2025-06-02 19:50:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:50:51.137179 | orchestrator | 2025-06-02 19:50:51 | INFO  | Please wait and do not abort execution. 
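The hddtemp play above removes the legacy `hddtemp` package and switches to the in-kernel `drivetemp` hwmon driver plus lm-sensors. In spirit, the "Enable/Load Kernel Module drivetemp" tasks boil down to the following manual sketch (not the role's actual code; the directory parameter is added here for illustration):

```shell
# Persist the drivetemp module so it loads on boot, and load it into the
# running kernel if this kernel ships it (drivetemp landed in Linux 5.6).
enable_drivetemp() {
    local conf_dir=${1:-/etc/modules-load.d}      # parameter added for testability
    echo drivetemp > "$conf_dir/drivetemp.conf"   # auto-load on boot
    if modinfo drivetemp >/dev/null 2>&1; then    # only if the module exists
        modprobe drivetemp                        # load it now
    fi
}
```

Once the module is loaded, `sensors` reports drive temperatures through the hwmon interface, which is why the play also installs and enables lm-sensors.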
2025-06-02 19:50:51.138650 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:50:51.139576 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:51.140765 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:51.141588 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:51.142302 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:51.144636 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:51.144652 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:51.144659 | orchestrator | 2025-06-02 19:50:51.144666 | orchestrator | 2025-06-02 19:50:51.145542 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:50:51.146254 | orchestrator | Monday 02 June 2025 19:50:51 +0000 (0:00:01.797) 0:00:22.625 *********** 2025-06-02 19:50:51.146945 | orchestrator | =============================================================================== 2025-06-02 19:50:51.147564 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.34s 2025-06-02 19:50:51.148126 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.88s 2025-06-02 19:50:51.148717 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.80s 2025-06-02 19:50:51.149103 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.35s 2025-06-02 19:50:51.149852 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.17s 
2025-06-02 19:50:51.150411 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.17s 2025-06-02 19:50:51.150899 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.12s 2025-06-02 19:50:51.151279 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.79s 2025-06-02 19:50:51.151610 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s 2025-06-02 19:50:51.683708 | orchestrator | ++ semver latest 7.1.1 2025-06-02 19:50:51.733537 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-02 19:50:51.733629 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-02 19:50:51.733644 | orchestrator | + sudo systemctl restart manager.service 2025-06-02 19:51:28.668378 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 19:51:28.668549 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-02 19:51:28.668568 | orchestrator | + local max_attempts=60 2025-06-02 19:51:28.668580 | orchestrator | + local name=ceph-ansible 2025-06-02 19:51:28.668590 | orchestrator | + local attempt_num=1 2025-06-02 19:51:28.668602 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:51:28.697014 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:51:28.697079 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:51:28.697092 | orchestrator | + sleep 5 2025-06-02 19:51:33.706790 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:51:33.746276 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:51:33.746388 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:51:33.746414 | orchestrator | + sleep 5 2025-06-02 19:51:38.749656 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:51:38.793831 | orchestrator | + [[ 
unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:51:38.793918 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:51:38.793932 | orchestrator | + sleep 5 2025-06-02 19:51:43.798217 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:51:43.833324 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:51:43.833480 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:51:43.833499 | orchestrator | + sleep 5 2025-06-02 19:51:48.837509 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:51:48.874974 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:51:48.875073 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:51:48.875088 | orchestrator | + sleep 5 2025-06-02 19:51:53.878855 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:51:53.915895 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:51:53.916006 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:51:53.916026 | orchestrator | + sleep 5 2025-06-02 19:51:58.920805 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:51:58.955613 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:51:58.955713 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:51:58.955729 | orchestrator | + sleep 5 2025-06-02 19:52:03.961812 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:52:03.998648 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:52:03.998732 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:52:03.998741 | orchestrator | + sleep 5 2025-06-02 19:52:08.998398 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:52:09.022652 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-06-02 19:52:09.022734 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:52:09.022744 | orchestrator | + sleep 5 2025-06-02 19:52:14.026746 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:52:14.068624 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:52:14.068727 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:52:14.068747 | orchestrator | + sleep 5 2025-06-02 19:52:19.072703 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:52:19.109127 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:52:19.109234 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:52:19.109251 | orchestrator | + sleep 5 2025-06-02 19:52:24.113135 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:52:24.150592 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:52:24.150691 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:52:24.150708 | orchestrator | + sleep 5 2025-06-02 19:52:29.155776 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:52:29.194142 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:52:29.194243 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:52:29.194258 | orchestrator | + sleep 5 2025-06-02 19:52:34.199295 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:52:34.242390 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:52:34.242485 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-02 19:52:34.242498 | orchestrator | + local max_attempts=60 2025-06-02 19:52:34.242507 | orchestrator | + local name=kolla-ansible 2025-06-02 19:52:34.242516 | orchestrator | + local attempt_num=1 2025-06-02 19:52:34.242848 | orchestrator 
| ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-02 19:52:34.273532 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:52:34.273593 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-02 19:52:34.273605 | orchestrator | + local max_attempts=60 2025-06-02 19:52:34.273617 | orchestrator | + local name=osism-ansible 2025-06-02 19:52:34.273628 | orchestrator | + local attempt_num=1 2025-06-02 19:52:34.274796 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-02 19:52:34.312142 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:52:34.312242 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-02 19:52:34.312257 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-02 19:52:34.469844 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-02 19:52:34.614126 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-02 19:52:34.901202 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-02 19:52:34.901339 | orchestrator | + osism apply gather-facts 2025-06-02 19:52:36.634202 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:52:36.634306 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:52:36.634354 | orchestrator | Registering Redlock._release_script 2025-06-02 19:52:36.700024 | orchestrator | 2025-06-02 19:52:36 | INFO  | Task 8868f985-4c4c-4934-b48d-82c542de9c58 (gather-facts) was prepared for execution. 2025-06-02 19:52:36.700119 | orchestrator | 2025-06-02 19:52:36 | INFO  | It takes a moment until task 8868f985-4c4c-4934-b48d-82c542de9c58 (gather-facts) has been started and output is visible here. 
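The xtrace above (local variable setup, repeated `docker inspect` health checks, attempt counter, `sleep 5`) suggests a polling helper along these lines. This is a reconstruction from the trace, not the exact OSISM testbed source:

```shell
#!/usr/bin/env bash
# Reconstructed sketch of the wait_for_container_healthy helper seen in the
# xtrace above; the real script may differ (e.g. it calls /usr/bin/docker).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    # Poll Docker's health status until the container reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Give up once the attempt budget (max_attempts polls, 5s apart) is spent.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second sleep, this allows roughly five minutes per container, matching the ceph-ansible wait of about one minute (unhealthy, then starting, then healthy) logged above.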
2025-06-02 19:52:40.777969 | orchestrator | 2025-06-02 19:52:40.778128 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 19:52:40.779221 | orchestrator | 2025-06-02 19:52:40.780056 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-02 19:52:40.782983 | orchestrator | Monday 02 June 2025 19:52:40 +0000 (0:00:00.222) 0:00:00.222 *********** 2025-06-02 19:52:46.586206 | orchestrator | ok: [testbed-manager] 2025-06-02 19:52:46.586326 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:52:46.586968 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:52:46.586993 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:52:46.587005 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:52:46.588289 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:52:46.589843 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:52:46.590212 | orchestrator | 2025-06-02 19:52:46.591387 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 19:52:46.593931 | orchestrator | 2025-06-02 19:52:46.593958 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 19:52:46.593970 | orchestrator | Monday 02 June 2025 19:52:46 +0000 (0:00:05.813) 0:00:06.036 *********** 2025-06-02 19:52:46.725408 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:52:46.791840 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:52:46.857724 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:52:46.931538 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:52:46.997152 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:52:47.034806 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:52:47.035258 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:52:47.035674 | orchestrator | 2025-06-02 19:52:47.037086 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 19:52:47.037131 | orchestrator | 2025-06-02 19:52:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:52:47.037146 | orchestrator | 2025-06-02 19:52:47 | INFO  | Please wait and do not abort execution. 2025-06-02 19:52:47.037703 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:52:47.038211 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:52:47.038917 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:52:47.039611 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:52:47.040216 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:52:47.040619 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:52:47.041367 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:52:47.043089 | orchestrator | 2025-06-02 19:52:47.043688 | orchestrator | 2025-06-02 19:52:47.044892 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:52:47.045565 | orchestrator | Monday 02 June 2025 19:52:47 +0000 (0:00:00.450) 0:00:06.486 *********** 2025-06-02 19:52:47.046269 | orchestrator | =============================================================================== 2025-06-02 19:52:47.046984 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.81s 2025-06-02 19:52:47.047485 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2025-06-02 19:52:47.429853 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-02 19:52:47.441488 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-02 19:52:47.453011 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-02 19:52:47.462685 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-02 19:52:47.472729 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-02 19:52:47.482564 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-02 19:52:47.490274 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-02 19:52:47.499264 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-02 19:52:47.508231 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-02 19:52:47.517820 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-02 19:52:47.527501 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-02 19:52:47.537630 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-02 19:52:47.547873 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-06-02 19:52:47.559514 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-02 19:52:47.577494 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-02 19:52:47.588351 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-02 19:52:47.600200 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-02 19:52:47.614325 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-02 19:52:47.623961 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-02 19:52:47.632236 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-02 19:52:47.640567 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-02 19:52:47.749218 | orchestrator | ok: Runtime: 0:20:21.703838 2025-06-02 19:52:47.847420 | 2025-06-02 19:52:47.847559 | TASK [Deploy services] 2025-06-02 19:52:48.391902 | orchestrator | skipping: Conditional result was False 2025-06-02 19:52:48.410331 | 2025-06-02 19:52:48.410501 | TASK [Deploy in a nutshell] 2025-06-02 19:52:49.112249 | orchestrator | + set -e 2025-06-02 19:52:49.112397 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 19:52:49.112414 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 19:52:49.112441 | orchestrator | ++ INTERACTIVE=false 2025-06-02 19:52:49.112450 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 19:52:49.112457 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 19:52:49.112475 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 19:52:49.112504 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 19:52:49.112520 | orchestrator | ++ 
NUMBER_OF_NODES=6 2025-06-02 19:52:49.112528 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 19:52:49.112537 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 19:52:49.112541 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 19:52:49.112548 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 19:52:49.112552 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-02 19:52:49.112561 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-02 19:52:49.112565 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 19:52:49.112571 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 19:52:49.112575 | orchestrator | ++ export ARA=false 2025-06-02 19:52:49.112579 | orchestrator | ++ ARA=false 2025-06-02 19:52:49.112583 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 19:52:49.112588 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 19:52:49.112595 | orchestrator | ++ export TEMPEST=false 2025-06-02 19:52:49.112601 | orchestrator | ++ TEMPEST=false 2025-06-02 19:52:49.112607 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 19:52:49.112613 | orchestrator | ++ IS_ZUUL=true 2025-06-02 19:52:49.112619 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2025-06-02 19:52:49.112627 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2025-06-02 19:52:49.112633 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 19:52:49.112640 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 19:52:49.112645 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 19:52:49.112651 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 19:52:49.112658 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 19:52:49.112663 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 19:52:49.112670 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 19:52:49.112680 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 19:52:49.112687 | orchestrator | + echo 2025-06-02 19:52:49.112693 | orchestrator | 2025-06-02 
19:52:49.112700 | orchestrator | # PULL IMAGES 2025-06-02 19:52:49.112705 | orchestrator | 2025-06-02 19:52:49.112736 | orchestrator | + echo '# PULL IMAGES' 2025-06-02 19:52:49.112744 | orchestrator | + echo 2025-06-02 19:52:49.113733 | orchestrator | ++ semver latest 7.0.0 2025-06-02 19:52:49.157512 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-02 19:52:49.157580 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-02 19:52:49.157599 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-02 19:52:50.621800 | orchestrator | 2025-06-02 19:52:50 | INFO  | Trying to run play pull-images in environment custom 2025-06-02 19:52:50.626748 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:52:50.626819 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:52:50.626833 | orchestrator | Registering Redlock._release_script 2025-06-02 19:52:50.683532 | orchestrator | 2025-06-02 19:52:50 | INFO  | Task 9e86dfca-5861-4067-a3be-4345b1bf4d35 (pull-images) was prepared for execution. 2025-06-02 19:52:50.683658 | orchestrator | 2025-06-02 19:52:50 | INFO  | It takes a moment until task 9e86dfca-5861-4067-a3be-4345b1bf4d35 (pull-images) has been started and output is visible here. 
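The trace above shows a version gate: `semver latest 7.0.0` prints `-1`, the `-ge 0` test fails, and a fallback `== latest` check still selects the retry-capable `osism apply -r 2` invocation. A minimal sketch of that gate, with `semver_cmp` as a simplified stand-in for the `semver` CLI seen in the log (it prints -1/0/1; "latest" compares low, exactly as in the trace):

```shell
#!/usr/bin/env bash
# semver_cmp: simplified stand-in for the `semver` helper in the log above.
# Prints -1, 0, or 1 comparing two X.Y.Z strings; "latest" compares low.
semver_cmp() {
    [[ $1 == latest ]] && { echo -1; return; }
    local IFS=. a b i
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for i in 0 1 2; do
        (( ${a[i]:-0} < ${b[i]:-0} )) && { echo -1; return; }
        (( ${a[i]:-0} > ${b[i]:-0} )) && { echo 1; return; }
    done
    echo 0
}

# Managers >= 7.0.0, or the rolling "latest" tag, take the new code path.
MANAGER_VERSION=${MANAGER_VERSION:-latest}
if [[ $(semver_cmp "$MANAGER_VERSION" 7.0.0) -ge 0 || $MANAGER_VERSION == latest ]]; then
    echo "would run: osism apply -r 2 -e custom pull-images"
fi
```

The double test explains the otherwise odd-looking `[[ -1 -ge 0 ]]` followed by `[[ latest == latest ]]` in the xtrace: the numeric compare rejects "latest", so the string compare catches it.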
2025-06-02 19:52:54.527179 | orchestrator | 2025-06-02 19:52:54.528891 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-02 19:52:54.532070 | orchestrator | 2025-06-02 19:52:54.532325 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-02 19:52:54.533904 | orchestrator | Monday 02 June 2025 19:52:54 +0000 (0:00:00.161) 0:00:00.161 *********** 2025-06-02 19:53:58.417914 | orchestrator | changed: [testbed-manager] 2025-06-02 19:53:58.418211 | orchestrator | 2025-06-02 19:53:58.419164 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-02 19:53:58.420167 | orchestrator | Monday 02 June 2025 19:53:58 +0000 (0:01:03.891) 0:01:04.053 *********** 2025-06-02 19:54:50.440096 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-02 19:54:50.440214 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-02 19:54:50.442284 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-02 19:54:50.444898 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-02 19:54:50.445590 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-02 19:54:50.445745 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-02 19:54:50.446578 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-02 19:54:50.447214 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-02 19:54:50.447711 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-02 19:54:50.448335 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-02 19:54:50.448738 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-02 19:54:50.448870 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-06-02 19:54:50.449376 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-02 19:54:50.449571 
| orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-02 19:54:50.449907 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-02 19:54:50.450266 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-02 19:54:50.450667 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-02 19:54:50.451030 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-02 19:54:50.452579 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-02 19:54:50.453996 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-02 19:54:50.454079 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-02 19:54:50.454625 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-02 19:54:50.455200 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-02 19:54:50.456758 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-02 19:54:50.456788 | orchestrator | 2025-06-02 19:54:50.456801 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:54:50.457077 | orchestrator | 2025-06-02 19:54:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:54:50.457338 | orchestrator | 2025-06-02 19:54:50 | INFO  | Please wait and do not abort execution. 
2025-06-02 19:54:50.458288 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:54:50.458311 | orchestrator | 2025-06-02 19:54:50.459059 | orchestrator | 2025-06-02 19:54:50.460166 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:54:50.460536 | orchestrator | Monday 02 June 2025 19:54:50 +0000 (0:00:52.023) 0:01:56.076 *********** 2025-06-02 19:54:50.461101 | orchestrator | =============================================================================== 2025-06-02 19:54:50.461631 | orchestrator | Pull keystone image ---------------------------------------------------- 63.89s 2025-06-02 19:54:50.462074 | orchestrator | Pull other images ------------------------------------------------------ 52.02s 2025-06-02 19:54:52.427110 | orchestrator | 2025-06-02 19:54:52 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-02 19:54:52.429881 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:54:52.429911 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:54:52.429923 | orchestrator | Registering Redlock._release_script 2025-06-02 19:54:52.483699 | orchestrator | 2025-06-02 19:54:52 | INFO  | Task ce24ba1f-4e5f-42f7-a88b-6aa2f6c389f8 (wipe-partitions) was prepared for execution. 2025-06-02 19:54:52.483794 | orchestrator | 2025-06-02 19:54:52 | INFO  | It takes a moment until task ce24ba1f-4e5f-42f7-a88b-6aa2f6c389f8 (wipe-partitions) has been started and output is visible here. 
2025-06-02 19:54:55.978962 | orchestrator | 2025-06-02 19:54:55.979084 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-02 19:54:55.979514 | orchestrator | 2025-06-02 19:54:55.981800 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-02 19:54:55.982095 | orchestrator | Monday 02 June 2025 19:54:55 +0000 (0:00:00.137) 0:00:00.137 *********** 2025-06-02 19:54:56.655571 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:54:56.657946 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:54:56.657985 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:54:56.658205 | orchestrator | 2025-06-02 19:54:56.658546 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-02 19:54:56.658750 | orchestrator | Monday 02 June 2025 19:54:56 +0000 (0:00:00.680) 0:00:00.817 *********** 2025-06-02 19:54:56.819727 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:56.933112 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:56.933190 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:54:56.933204 | orchestrator | 2025-06-02 19:54:56.933218 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-02 19:54:56.933230 | orchestrator | Monday 02 June 2025 19:54:56 +0000 (0:00:00.272) 0:00:01.090 *********** 2025-06-02 19:54:57.797596 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:54:57.797717 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:54:57.800860 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:54:57.800908 | orchestrator | 2025-06-02 19:54:57.800922 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-02 19:54:57.800935 | orchestrator | Monday 02 June 2025 19:54:57 +0000 (0:00:00.869) 0:00:01.959 *********** 2025-06-02 19:54:57.940634 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 19:54:58.025592 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:58.025694 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:54:58.025771 | orchestrator | 2025-06-02 19:54:58.025862 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-02 19:54:58.026131 | orchestrator | Monday 02 June 2025 19:54:58 +0000 (0:00:00.227) 0:00:02.187 *********** 2025-06-02 19:54:59.212865 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 19:54:59.214342 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 19:54:59.214375 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 19:54:59.214695 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 19:54:59.215694 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 19:54:59.215803 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-02 19:54:59.216116 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 19:54:59.216314 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 19:54:59.216609 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 19:54:59.216904 | orchestrator | 2025-06-02 19:54:59.217322 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-02 19:54:59.218683 | orchestrator | Monday 02 June 2025 19:54:59 +0000 (0:00:01.186) 0:00:03.373 *********** 2025-06-02 19:55:00.492818 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 19:55:00.493937 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 19:55:00.494293 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 19:55:00.494724 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 19:55:00.495065 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 19:55:00.495597 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-06-02 19:55:00.496035 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 19:55:00.496466 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 19:55:00.496784 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 19:55:00.498827 | orchestrator | 2025-06-02 19:55:00.498967 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-02 19:55:00.499205 | orchestrator | Monday 02 June 2025 19:55:00 +0000 (0:00:01.279) 0:00:04.653 *********** 2025-06-02 19:55:02.636811 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 19:55:02.637028 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 19:55:02.638147 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 19:55:02.638980 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 19:55:02.639965 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 19:55:02.641684 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-02 19:55:02.641896 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 19:55:02.642589 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 19:55:02.644227 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 19:55:02.644267 | orchestrator | 2025-06-02 19:55:02.645061 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-02 19:55:02.645143 | orchestrator | Monday 02 June 2025 19:55:02 +0000 (0:00:02.145) 0:00:06.798 *********** 2025-06-02 19:55:03.205581 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:55:03.206112 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:55:03.206195 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:55:03.207615 | orchestrator | 2025-06-02 19:55:03.211014 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-06-02 19:55:03.214354 | orchestrator | Monday 02 June 2025 19:55:03 +0000 (0:00:00.567) 0:00:07.366 *********** 2025-06-02 19:55:03.804823 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:55:03.804922 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:55:03.805470 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:55:03.805878 | orchestrator | 2025-06-02 19:55:03.806548 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:55:03.806807 | orchestrator | 2025-06-02 19:55:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:55:03.807634 | orchestrator | 2025-06-02 19:55:03 | INFO  | Please wait and do not abort execution. 2025-06-02 19:55:03.808550 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:03.809001 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:03.809785 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:03.810149 | orchestrator | 2025-06-02 19:55:03.810785 | orchestrator | 2025-06-02 19:55:03.811420 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:55:03.812759 | orchestrator | Monday 02 June 2025 19:55:03 +0000 (0:00:00.596) 0:00:07.963 *********** 2025-06-02 19:55:03.813170 | orchestrator | =============================================================================== 2025-06-02 19:55:03.813857 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.15s 2025-06-02 19:55:03.813994 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.28s 2025-06-02 19:55:03.814667 | orchestrator | Check device availability 
----------------------------------------------- 1.19s 2025-06-02 19:55:03.815011 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.87s 2025-06-02 19:55:03.815884 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.68s 2025-06-02 19:55:03.816202 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2025-06-02 19:55:03.816625 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s 2025-06-02 19:55:03.817071 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-06-02 19:55:03.817589 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-06-02 19:55:05.709510 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:55:05.709755 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:55:05.709778 | orchestrator | Registering Redlock._release_script 2025-06-02 19:55:05.761598 | orchestrator | 2025-06-02 19:55:05 | INFO  | Task f95e9119-be7d-4f78-8cb6-bfa489d834ef (facts) was prepared for execution. 2025-06-02 19:55:05.761707 | orchestrator | 2025-06-02 19:55:05 | INFO  | It takes a moment until task f95e9119-be7d-4f78-8cb6-bfa489d834ef (facts) has been started and output is visible here. 
2025-06-02 19:55:09.397992 | orchestrator | 2025-06-02 19:55:09.399672 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-02 19:55:09.400712 | orchestrator | 2025-06-02 19:55:09.402264 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 19:55:09.403574 | orchestrator | Monday 02 June 2025 19:55:09 +0000 (0:00:00.252) 0:00:00.252 *********** 2025-06-02 19:55:10.571295 | orchestrator | ok: [testbed-manager] 2025-06-02 19:55:10.574570 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:55:10.574679 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:55:10.574934 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:55:10.575331 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:55:10.575908 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:55:10.576656 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:10.576684 | orchestrator | 2025-06-02 19:55:10.577127 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 19:55:10.577886 | orchestrator | Monday 02 June 2025 19:55:10 +0000 (0:00:01.166) 0:00:01.419 *********** 2025-06-02 19:55:10.728173 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:55:10.811490 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:55:10.892658 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:55:10.969781 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:55:11.048130 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:11.773142 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:55:11.773243 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:11.777330 | orchestrator | 2025-06-02 19:55:11.777493 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 19:55:11.777512 | orchestrator | 2025-06-02 19:55:11.778723 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-02 19:55:11.783118 | orchestrator | Monday 02 June 2025 19:55:11 +0000 (0:00:01.207) 0:00:02.626 *********** 2025-06-02 19:55:16.728245 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:55:16.728573 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:55:16.729178 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:55:16.730268 | orchestrator | ok: [testbed-manager] 2025-06-02 19:55:16.731080 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:55:16.732385 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:16.733184 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:55:16.734305 | orchestrator | 2025-06-02 19:55:16.734702 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 19:55:16.736293 | orchestrator | 2025-06-02 19:55:16.737465 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 19:55:16.740082 | orchestrator | Monday 02 June 2025 19:55:16 +0000 (0:00:04.956) 0:00:07.583 *********** 2025-06-02 19:55:16.883830 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:55:16.963044 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:55:17.040289 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:55:17.118795 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:55:17.196535 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:17.246946 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:55:17.247806 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:17.248027 | orchestrator | 2025-06-02 19:55:17.248702 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:55:17.249137 | orchestrator | 2025-06-02 19:55:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 19:55:17.249738 | orchestrator | 2025-06-02 19:55:17 | INFO  | Please wait and do not abort execution. 2025-06-02 19:55:17.249952 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:17.250595 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:17.251028 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:17.251355 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:17.251992 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:17.252391 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:17.253423 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:17.253507 | orchestrator | 2025-06-02 19:55:17.253924 | orchestrator | 2025-06-02 19:55:17.254270 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:55:17.254888 | orchestrator | Monday 02 June 2025 19:55:17 +0000 (0:00:00.519) 0:00:08.103 *********** 2025-06-02 19:55:17.255292 | orchestrator | =============================================================================== 2025-06-02 19:55:17.256126 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.96s 2025-06-02 19:55:17.256305 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2025-06-02 19:55:17.256689 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s 2025-06-02 19:55:17.257007 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-06-02 
19:55:19.794410 | orchestrator | 2025-06-02 19:55:19 | INFO  | Task d009c7a4-cc7e-4017-b0bf-0e1d51ca396b (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-02 19:55:19.794568 | orchestrator | 2025-06-02 19:55:19 | INFO  | It takes a moment until task d009c7a4-cc7e-4017-b0bf-0e1d51ca396b (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-06-02 19:55:24.339368 | orchestrator | 2025-06-02 19:55:24.339592 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 19:55:24.341783 | orchestrator | 2025-06-02 19:55:24.343527 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 19:55:24.344831 | orchestrator | Monday 02 June 2025 19:55:24 +0000 (0:00:00.314) 0:00:00.314 *********** 2025-06-02 19:55:24.582943 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 19:55:24.586144 | orchestrator | 2025-06-02 19:55:24.586194 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 19:55:24.586210 | orchestrator | Monday 02 June 2025 19:55:24 +0000 (0:00:00.246) 0:00:00.560 *********** 2025-06-02 19:55:24.826263 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:55:24.826630 | orchestrator | 2025-06-02 19:55:24.827855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:24.828939 | orchestrator | Monday 02 June 2025 19:55:24 +0000 (0:00:00.242) 0:00:00.803 *********** 2025-06-02 19:55:25.240753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-02 19:55:25.241170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-02 19:55:25.242409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-02 19:55:25.244740 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-02 19:55:25.244788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-02 19:55:25.244845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-02 19:55:25.245799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-02 19:55:25.246190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-02 19:55:25.247505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-02 19:55:25.247668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-02 19:55:25.248351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-02 19:55:25.249637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-02 19:55:25.250084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-02 19:55:25.250610 | orchestrator | 2025-06-02 19:55:25.251025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:25.252166 | orchestrator | Monday 02 June 2025 19:55:25 +0000 (0:00:00.412) 0:00:01.215 *********** 2025-06-02 19:55:25.775020 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:25.775643 | orchestrator | 2025-06-02 19:55:25.776726 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:25.777060 | orchestrator | Monday 02 June 2025 19:55:25 +0000 (0:00:00.537) 0:00:01.753 *********** 2025-06-02 19:55:25.979926 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:25.981160 | orchestrator | 2025-06-02 19:55:25.981219 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:25.981682 | orchestrator | Monday 02 June 2025 19:55:25 +0000 (0:00:00.205) 0:00:01.958 *********** 2025-06-02 19:55:26.194267 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:26.196147 | orchestrator | 2025-06-02 19:55:26.197119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:26.198255 | orchestrator | Monday 02 June 2025 19:55:26 +0000 (0:00:00.212) 0:00:02.170 *********** 2025-06-02 19:55:26.376642 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:26.376750 | orchestrator | 2025-06-02 19:55:26.377765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:26.378772 | orchestrator | Monday 02 June 2025 19:55:26 +0000 (0:00:00.183) 0:00:02.354 *********** 2025-06-02 19:55:26.564285 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:26.565131 | orchestrator | 2025-06-02 19:55:26.566211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:26.567025 | orchestrator | Monday 02 June 2025 19:55:26 +0000 (0:00:00.188) 0:00:02.542 *********** 2025-06-02 19:55:26.756509 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:26.757177 | orchestrator | 2025-06-02 19:55:26.757213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:26.757762 | orchestrator | Monday 02 June 2025 19:55:26 +0000 (0:00:00.192) 0:00:02.735 *********** 2025-06-02 19:55:26.949077 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:26.949988 | orchestrator | 2025-06-02 19:55:26.950312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:26.952347 | orchestrator | Monday 02 June 2025 19:55:26 +0000 (0:00:00.192) 0:00:02.927 *********** 2025-06-02 
19:55:27.146852 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:27.147146 | orchestrator | 2025-06-02 19:55:27.147705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:27.149161 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.196) 0:00:03.124 *********** 2025-06-02 19:55:27.569667 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80) 2025-06-02 19:55:27.569962 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80) 2025-06-02 19:55:27.570344 | orchestrator | 2025-06-02 19:55:27.571069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:27.572702 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.423) 0:00:03.547 *********** 2025-06-02 19:55:27.966292 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250) 2025-06-02 19:55:27.966392 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250) 2025-06-02 19:55:27.966921 | orchestrator | 2025-06-02 19:55:27.968706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:27.969096 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.396) 0:00:03.943 *********** 2025-06-02 19:55:28.866224 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773) 2025-06-02 19:55:28.866407 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773) 2025-06-02 19:55:28.866741 | orchestrator | 2025-06-02 19:55:28.867387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:28.867974 | orchestrator | Monday 02 June 2025 19:55:28 +0000 
(0:00:00.899) 0:00:04.842 *********** 2025-06-02 19:55:29.574623 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335) 2025-06-02 19:55:29.574753 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335) 2025-06-02 19:55:29.578861 | orchestrator | 2025-06-02 19:55:29.580637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:55:29.582188 | orchestrator | Monday 02 June 2025 19:55:29 +0000 (0:00:00.706) 0:00:05.549 *********** 2025-06-02 19:55:30.308243 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 19:55:30.311766 | orchestrator | 2025-06-02 19:55:30.313095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:30.313522 | orchestrator | Monday 02 June 2025 19:55:30 +0000 (0:00:00.734) 0:00:06.284 *********** 2025-06-02 19:55:30.667161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-02 19:55:30.668633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-02 19:55:30.669133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-02 19:55:30.673343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-02 19:55:30.674554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-02 19:55:30.675637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-02 19:55:30.675964 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-02 19:55:30.677004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2025-06-02 19:55:30.677630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-02 19:55:30.678319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-02 19:55:30.678913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-02 19:55:30.679409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 19:55:30.680348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 19:55:30.684180 | orchestrator | 2025-06-02 19:55:30.684805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:30.685456 | orchestrator | Monday 02 June 2025 19:55:30 +0000 (0:00:00.361) 0:00:06.645 *********** 2025-06-02 19:55:30.867401 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:30.870222 | orchestrator | 2025-06-02 19:55:30.870254 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:30.870267 | orchestrator | Monday 02 June 2025 19:55:30 +0000 (0:00:00.200) 0:00:06.846 *********** 2025-06-02 19:55:31.062578 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:31.064495 | orchestrator | 2025-06-02 19:55:31.065211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:31.067755 | orchestrator | Monday 02 June 2025 19:55:31 +0000 (0:00:00.195) 0:00:07.041 *********** 2025-06-02 19:55:31.267221 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:31.267545 | orchestrator | 2025-06-02 19:55:31.268618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:31.269162 | orchestrator | Monday 02 June 2025 19:55:31 +0000 
(0:00:00.203) 0:00:07.244 *********** 2025-06-02 19:55:31.477097 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:31.482002 | orchestrator | 2025-06-02 19:55:31.483052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:31.484677 | orchestrator | Monday 02 June 2025 19:55:31 +0000 (0:00:00.210) 0:00:07.454 *********** 2025-06-02 19:55:31.685165 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:31.685663 | orchestrator | 2025-06-02 19:55:31.687915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:31.688849 | orchestrator | Monday 02 June 2025 19:55:31 +0000 (0:00:00.207) 0:00:07.662 *********** 2025-06-02 19:55:31.882313 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:31.882814 | orchestrator | 2025-06-02 19:55:31.887025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:31.890978 | orchestrator | Monday 02 June 2025 19:55:31 +0000 (0:00:00.197) 0:00:07.860 *********** 2025-06-02 19:55:32.128452 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:32.128562 | orchestrator | 2025-06-02 19:55:32.128578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:32.128591 | orchestrator | Monday 02 June 2025 19:55:32 +0000 (0:00:00.242) 0:00:08.102 *********** 2025-06-02 19:55:32.344068 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:32.344191 | orchestrator | 2025-06-02 19:55:32.344490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:32.344598 | orchestrator | Monday 02 June 2025 19:55:32 +0000 (0:00:00.216) 0:00:08.319 *********** 2025-06-02 19:55:33.344077 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 19:55:33.345054 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-02 
19:55:33.345951 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 19:55:33.347062 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 19:55:33.347348 | orchestrator | 2025-06-02 19:55:33.348659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:33.350837 | orchestrator | Monday 02 June 2025 19:55:33 +0000 (0:00:01.003) 0:00:09.323 *********** 2025-06-02 19:55:33.539002 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:33.540850 | orchestrator | 2025-06-02 19:55:33.540906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:33.540923 | orchestrator | Monday 02 June 2025 19:55:33 +0000 (0:00:00.193) 0:00:09.516 *********** 2025-06-02 19:55:33.728484 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:33.729361 | orchestrator | 2025-06-02 19:55:33.730607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:33.731604 | orchestrator | Monday 02 June 2025 19:55:33 +0000 (0:00:00.188) 0:00:09.705 *********** 2025-06-02 19:55:33.927635 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:33.930859 | orchestrator | 2025-06-02 19:55:33.933562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:33.934541 | orchestrator | Monday 02 June 2025 19:55:33 +0000 (0:00:00.196) 0:00:09.901 *********** 2025-06-02 19:55:34.136848 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:34.139816 | orchestrator | 2025-06-02 19:55:34.140574 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 19:55:34.141355 | orchestrator | Monday 02 June 2025 19:55:34 +0000 (0:00:00.211) 0:00:10.112 *********** 2025-06-02 19:55:34.331805 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-02 19:55:34.332070 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-02 19:55:34.332567 | orchestrator | 2025-06-02 19:55:34.333332 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-02 19:55:34.333921 | orchestrator | Monday 02 June 2025 19:55:34 +0000 (0:00:00.195) 0:00:10.308 *********** 2025-06-02 19:55:34.474312 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:34.474403 | orchestrator | 2025-06-02 19:55:34.474415 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 19:55:34.475104 | orchestrator | Monday 02 June 2025 19:55:34 +0000 (0:00:00.143) 0:00:10.451 *********** 2025-06-02 19:55:34.631554 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:34.633894 | orchestrator | 2025-06-02 19:55:34.637864 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 19:55:34.637898 | orchestrator | Monday 02 June 2025 19:55:34 +0000 (0:00:00.157) 0:00:10.609 *********** 2025-06-02 19:55:34.776351 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:34.777929 | orchestrator | 2025-06-02 19:55:34.781165 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 19:55:34.781245 | orchestrator | Monday 02 June 2025 19:55:34 +0000 (0:00:00.144) 0:00:10.753 *********** 2025-06-02 19:55:34.920946 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:55:34.922779 | orchestrator | 2025-06-02 19:55:34.924801 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 19:55:34.926101 | orchestrator | Monday 02 June 2025 19:55:34 +0000 (0:00:00.144) 0:00:10.897 *********** 2025-06-02 19:55:35.082306 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5468daec-208d-5ea7-b544-bcde6bebed84'}}) 2025-06-02 19:55:35.086182 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd0ca6db9-1635-53d8-80de-4807c4d987bd'}}) 2025-06-02 19:55:35.086254 | orchestrator | 2025-06-02 19:55:35.086935 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 19:55:35.088263 | orchestrator | Monday 02 June 2025 19:55:35 +0000 (0:00:00.159) 0:00:11.057 *********** 2025-06-02 19:55:35.243799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5468daec-208d-5ea7-b544-bcde6bebed84'}})  2025-06-02 19:55:35.244570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd0ca6db9-1635-53d8-80de-4807c4d987bd'}})  2025-06-02 19:55:35.245194 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:35.246144 | orchestrator | 2025-06-02 19:55:35.247403 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 19:55:35.251134 | orchestrator | Monday 02 June 2025 19:55:35 +0000 (0:00:00.164) 0:00:11.222 *********** 2025-06-02 19:55:35.663955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5468daec-208d-5ea7-b544-bcde6bebed84'}})  2025-06-02 19:55:35.667309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd0ca6db9-1635-53d8-80de-4807c4d987bd'}})  2025-06-02 19:55:35.669539 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:35.670193 | orchestrator | 2025-06-02 19:55:35.671376 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 19:55:35.671500 | orchestrator | Monday 02 June 2025 19:55:35 +0000 (0:00:00.418) 0:00:11.641 *********** 2025-06-02 19:55:35.848750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5468daec-208d-5ea7-b544-bcde6bebed84'}})  2025-06-02 19:55:35.849128 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd0ca6db9-1635-53d8-80de-4807c4d987bd'}})  2025-06-02 19:55:35.850231 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:35.851300 | orchestrator | 2025-06-02 19:55:35.851325 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 19:55:35.853580 | orchestrator | Monday 02 June 2025 19:55:35 +0000 (0:00:00.184) 0:00:11.825 *********** 2025-06-02 19:55:35.989996 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:55:35.993710 | orchestrator | 2025-06-02 19:55:35.993835 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 19:55:35.994558 | orchestrator | Monday 02 June 2025 19:55:35 +0000 (0:00:00.142) 0:00:11.968 *********** 2025-06-02 19:55:36.123559 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:55:36.123636 | orchestrator | 2025-06-02 19:55:36.123651 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 19:55:36.123665 | orchestrator | Monday 02 June 2025 19:55:36 +0000 (0:00:00.131) 0:00:12.099 *********** 2025-06-02 19:55:36.260953 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:36.261762 | orchestrator | 2025-06-02 19:55:36.262817 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 19:55:36.263747 | orchestrator | Monday 02 June 2025 19:55:36 +0000 (0:00:00.139) 0:00:12.238 *********** 2025-06-02 19:55:36.388580 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:36.388684 | orchestrator | 2025-06-02 19:55:36.388878 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 19:55:36.392231 | orchestrator | Monday 02 June 2025 19:55:36 +0000 (0:00:00.126) 0:00:12.365 *********** 2025-06-02 19:55:36.553228 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:36.553327 | orchestrator | 2025-06-02 
19:55:36.553342 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 19:55:36.553355 | orchestrator | Monday 02 June 2025 19:55:36 +0000 (0:00:00.164) 0:00:12.529 *********** 2025-06-02 19:55:36.696791 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 19:55:36.698521 | orchestrator |  "ceph_osd_devices": { 2025-06-02 19:55:36.701799 | orchestrator |  "sdb": { 2025-06-02 19:55:36.702358 | orchestrator |  "osd_lvm_uuid": "5468daec-208d-5ea7-b544-bcde6bebed84" 2025-06-02 19:55:36.703637 | orchestrator |  }, 2025-06-02 19:55:36.703667 | orchestrator |  "sdc": { 2025-06-02 19:55:36.703687 | orchestrator |  "osd_lvm_uuid": "d0ca6db9-1635-53d8-80de-4807c4d987bd" 2025-06-02 19:55:36.704180 | orchestrator |  } 2025-06-02 19:55:36.704804 | orchestrator |  } 2025-06-02 19:55:36.704962 | orchestrator | } 2025-06-02 19:55:36.705585 | orchestrator | 2025-06-02 19:55:36.706583 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-02 19:55:36.706607 | orchestrator | Monday 02 June 2025 19:55:36 +0000 (0:00:00.144) 0:00:12.674 *********** 2025-06-02 19:55:36.839417 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:36.839802 | orchestrator | 2025-06-02 19:55:36.841074 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-02 19:55:36.844112 | orchestrator | Monday 02 June 2025 19:55:36 +0000 (0:00:00.142) 0:00:12.817 *********** 2025-06-02 19:55:36.987904 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:36.988004 | orchestrator | 2025-06-02 19:55:36.988252 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-02 19:55:36.988879 | orchestrator | Monday 02 June 2025 19:55:36 +0000 (0:00:00.148) 0:00:12.966 *********** 2025-06-02 19:55:37.153932 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:37.155083 | orchestrator | 2025-06-02 
19:55:37.156567 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 19:55:37.157376 | orchestrator | Monday 02 June 2025 19:55:37 +0000 (0:00:00.166) 0:00:13.132 ***********
2025-06-02 19:55:37.372154 | orchestrator | changed: [testbed-node-3] => {
2025-06-02 19:55:37.373211 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 19:55:37.378698 | orchestrator |  "ceph_osd_devices": {
2025-06-02 19:55:37.378738 | orchestrator |  "sdb": {
2025-06-02 19:55:37.379268 | orchestrator |  "osd_lvm_uuid": "5468daec-208d-5ea7-b544-bcde6bebed84"
2025-06-02 19:55:37.379877 | orchestrator |  },
2025-06-02 19:55:37.380728 | orchestrator |  "sdc": {
2025-06-02 19:55:37.381875 | orchestrator |  "osd_lvm_uuid": "d0ca6db9-1635-53d8-80de-4807c4d987bd"
2025-06-02 19:55:37.382585 | orchestrator |  }
2025-06-02 19:55:37.382926 | orchestrator |  },
2025-06-02 19:55:37.383885 | orchestrator |  "lvm_volumes": [
2025-06-02 19:55:37.384872 | orchestrator |  {
2025-06-02 19:55:37.385186 | orchestrator |  "data": "osd-block-5468daec-208d-5ea7-b544-bcde6bebed84",
2025-06-02 19:55:37.386803 | orchestrator |  "data_vg": "ceph-5468daec-208d-5ea7-b544-bcde6bebed84"
2025-06-02 19:55:37.388825 | orchestrator |  },
2025-06-02 19:55:37.389171 | orchestrator |  {
2025-06-02 19:55:37.389624 | orchestrator |  "data": "osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd",
2025-06-02 19:55:37.390322 | orchestrator |  "data_vg": "ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd"
2025-06-02 19:55:37.390826 | orchestrator |  }
2025-06-02 19:55:37.391391 | orchestrator |  ]
2025-06-02 19:55:37.391924 | orchestrator |  }
2025-06-02 19:55:37.392382 | orchestrator | }
2025-06-02 19:55:37.393039 | orchestrator |
2025-06-02 19:55:37.393661 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 19:55:37.394332 | orchestrator | Monday 02 June 2025 19:55:37 +0000 (0:00:00.216) 0:00:13.349 ***********
2025-06-02 19:55:39.492291 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 19:55:39.495792 | orchestrator |
2025-06-02 19:55:39.497800 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-02 19:55:39.501611 | orchestrator |
2025-06-02 19:55:39.502380 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 19:55:39.503941 | orchestrator | Monday 02 June 2025 19:55:39 +0000 (0:00:02.115) 0:00:15.465 ***********
2025-06-02 19:55:39.770583 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 19:55:39.771027 | orchestrator |
2025-06-02 19:55:39.772629 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 19:55:39.778275 | orchestrator | Monday 02 June 2025 19:55:39 +0000 (0:00:00.282) 0:00:15.747 ***********
2025-06-02 19:55:40.004652 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:55:40.005796 | orchestrator |
2025-06-02 19:55:40.007619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:40.012418 | orchestrator | Monday 02 June 2025 19:55:39 +0000 (0:00:00.235) 0:00:15.982 ***********
2025-06-02 19:55:40.416393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-02 19:55:40.417779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-02 19:55:40.421914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-02 19:55:40.423198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-02 19:55:40.424845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-02 19:55:40.426225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-02 19:55:40.427393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-02 19:55:40.428586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-02 19:55:40.429600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-02 19:55:40.431184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-02 19:55:40.432012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-02 19:55:40.432922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-02 19:55:40.433924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-02 19:55:40.434733 | orchestrator |
2025-06-02 19:55:40.435729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:40.435888 | orchestrator | Monday 02 June 2025 19:55:40 +0000 (0:00:00.410) 0:00:16.393 ***********
2025-06-02 19:55:40.617671 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:40.617817 | orchestrator |
2025-06-02 19:55:40.618998 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:40.619929 | orchestrator | Monday 02 June 2025 19:55:40 +0000 (0:00:00.198) 0:00:16.592 ***********
2025-06-02 19:55:40.808677 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:40.809739 | orchestrator |
2025-06-02 19:55:40.811178 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:40.812451 | orchestrator | Monday 02 June 2025 19:55:40 +0000 (0:00:00.193) 0:00:16.786 ***********
2025-06-02 19:55:41.001478 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:41.001563 | orchestrator |
2025-06-02 19:55:41.004552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:41.004603 | orchestrator | Monday 02 June 2025 19:55:40 +0000 (0:00:00.193) 0:00:16.979 ***********
2025-06-02 19:55:41.199348 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:41.199615 | orchestrator |
2025-06-02 19:55:41.199945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:41.200589 | orchestrator | Monday 02 June 2025 19:55:41 +0000 (0:00:00.196) 0:00:17.176 ***********
2025-06-02 19:55:41.796466 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:41.799343 | orchestrator |
2025-06-02 19:55:41.800818 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:41.804234 | orchestrator | Monday 02 June 2025 19:55:41 +0000 (0:00:00.598) 0:00:17.774 ***********
2025-06-02 19:55:41.989299 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:41.990257 | orchestrator |
2025-06-02 19:55:41.993894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:41.993922 | orchestrator | Monday 02 June 2025 19:55:41 +0000 (0:00:00.191) 0:00:17.966 ***********
2025-06-02 19:55:42.183788 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:42.185107 | orchestrator |
2025-06-02 19:55:42.186646 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:42.191839 | orchestrator | Monday 02 June 2025 19:55:42 +0000 (0:00:00.195) 0:00:18.162 ***********
2025-06-02 19:55:42.382311 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:42.384536 | orchestrator |
2025-06-02 19:55:42.387813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:42.387844 | orchestrator | Monday 02 June 2025 19:55:42 +0000 (0:00:00.198) 0:00:18.360 ***********
2025-06-02 19:55:42.799797 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d)
2025-06-02 19:55:42.800004 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d)
2025-06-02 19:55:42.801024 | orchestrator |
2025-06-02 19:55:42.801346 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:42.802077 | orchestrator | Monday 02 June 2025 19:55:42 +0000 (0:00:00.414) 0:00:18.774 ***********
2025-06-02 19:55:43.334078 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696)
2025-06-02 19:55:43.336082 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696)
2025-06-02 19:55:43.336645 | orchestrator |
2025-06-02 19:55:43.336853 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:43.337489 | orchestrator | Monday 02 June 2025 19:55:43 +0000 (0:00:00.536) 0:00:19.311 ***********
2025-06-02 19:55:43.748670 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4)
2025-06-02 19:55:43.749856 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4)
2025-06-02 19:55:43.750562 | orchestrator |
2025-06-02 19:55:43.750748 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:43.751081 | orchestrator | Monday 02 June 2025 19:55:43 +0000 (0:00:00.413) 0:00:19.725 ***********
2025-06-02 19:55:44.152196 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db)
2025-06-02 19:55:44.152321 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db)
2025-06-02 19:55:44.152563 | orchestrator |
2025-06-02 19:55:44.152787 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:44.154104 | orchestrator | Monday 02 June 2025 19:55:44 +0000 (0:00:00.406) 0:00:20.131 ***********
2025-06-02 19:55:44.479115 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 19:55:44.479725 | orchestrator |
2025-06-02 19:55:44.479760 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:44.479861 | orchestrator | Monday 02 June 2025 19:55:44 +0000 (0:00:00.326) 0:00:20.458 ***********
2025-06-02 19:55:44.866391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-02 19:55:44.866636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-02 19:55:44.870249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-02 19:55:44.870823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-02 19:55:44.871911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-02 19:55:44.872728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-02 19:55:44.873115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-02 19:55:44.874285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-02 19:55:44.874820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-02 19:55:44.875523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-02 19:55:44.876406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-02 19:55:44.877088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-02 19:55:44.877761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-02 19:55:44.878469 | orchestrator |
2025-06-02 19:55:44.879026 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:44.879723 | orchestrator | Monday 02 June 2025 19:55:44 +0000 (0:00:00.384) 0:00:20.842 ***********
2025-06-02 19:55:45.053345 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:45.053549 | orchestrator |
2025-06-02 19:55:45.054213 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:45.055412 | orchestrator | Monday 02 June 2025 19:55:45 +0000 (0:00:00.186) 0:00:21.029 ***********
2025-06-02 19:55:45.705636 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:45.708771 | orchestrator |
2025-06-02 19:55:45.709023 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:45.711759 | orchestrator | Monday 02 June 2025 19:55:45 +0000 (0:00:00.652) 0:00:21.681 ***********
2025-06-02 19:55:45.911669 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:45.913349 | orchestrator |
2025-06-02 19:55:45.914230 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:45.915580 | orchestrator | Monday 02 June 2025 19:55:45 +0000 (0:00:00.207) 0:00:21.889 ***********
2025-06-02 19:55:46.103152 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:46.103259 | orchestrator |
2025-06-02 19:55:46.103271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:46.103662 | orchestrator | Monday 02 June 2025 19:55:46 +0000 (0:00:00.190) 0:00:22.079 ***********
2025-06-02 19:55:46.288801 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:46.290417 | orchestrator |
2025-06-02 19:55:46.291162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:46.291895 | orchestrator | Monday 02 June 2025 19:55:46 +0000 (0:00:00.186) 0:00:22.266 ***********
2025-06-02 19:55:46.486252 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:46.486744 | orchestrator |
2025-06-02 19:55:46.488109 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:46.489176 | orchestrator | Monday 02 June 2025 19:55:46 +0000 (0:00:00.197) 0:00:22.463 ***********
2025-06-02 19:55:46.692252 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:46.692519 | orchestrator |
2025-06-02 19:55:46.692603 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:46.692866 | orchestrator | Monday 02 June 2025 19:55:46 +0000 (0:00:00.206) 0:00:22.670 ***********
2025-06-02 19:55:46.891211 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:46.891305 | orchestrator |
2025-06-02 19:55:46.892762 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:46.893619 | orchestrator | Monday 02 June 2025 19:55:46 +0000 (0:00:00.194) 0:00:22.864 ***********
2025-06-02 19:55:47.535285 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-02 19:55:47.535919 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-02 19:55:47.537115 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-02 19:55:47.537673 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-02 19:55:47.538581 | orchestrator |
2025-06-02 19:55:47.541684 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:47.541729 | orchestrator | Monday 02 June 2025 19:55:47 +0000 (0:00:00.648) 0:00:23.513 ***********
2025-06-02 19:55:47.740550 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:47.742112 | orchestrator |
2025-06-02 19:55:47.744578 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:47.746782 | orchestrator | Monday 02 June 2025 19:55:47 +0000 (0:00:00.203) 0:00:23.716 ***********
2025-06-02 19:55:47.955034 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:47.957400 | orchestrator |
2025-06-02 19:55:47.961845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:47.961896 | orchestrator | Monday 02 June 2025 19:55:47 +0000 (0:00:00.215) 0:00:23.932 ***********
2025-06-02 19:55:48.147854 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:48.149628 | orchestrator |
2025-06-02 19:55:48.152896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:48.152945 | orchestrator | Monday 02 June 2025 19:55:48 +0000 (0:00:00.192) 0:00:24.125 ***********
2025-06-02 19:55:48.350073 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:48.354104 | orchestrator |
2025-06-02 19:55:48.354163 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-02 19:55:48.354177 | orchestrator | Monday 02 June 2025 19:55:48 +0000 (0:00:00.201) 0:00:24.326 ***********
2025-06-02 19:55:48.713195 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-06-02 19:55:48.714617 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-06-02 19:55:48.716090 | orchestrator |
2025-06-02 19:55:48.718570 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-02 19:55:48.719845 | orchestrator | Monday 02 June 2025 19:55:48 +0000 (0:00:00.358) 0:00:24.684 ***********
2025-06-02 19:55:48.830645 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:48.830742 | orchestrator |
2025-06-02 19:55:48.830757 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-02 19:55:48.831725 | orchestrator | Monday 02 June 2025 19:55:48 +0000 (0:00:00.124) 0:00:24.809 ***********
2025-06-02 19:55:48.987126 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:48.987231 | orchestrator |
2025-06-02 19:55:48.987744 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-02 19:55:48.988112 | orchestrator | Monday 02 June 2025 19:55:48 +0000 (0:00:00.155) 0:00:24.964 ***********
2025-06-02 19:55:49.091418 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:49.092718 | orchestrator |
2025-06-02 19:55:49.093378 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-02 19:55:49.094746 | orchestrator | Monday 02 June 2025 19:55:49 +0000 (0:00:00.106) 0:00:25.071 ***********
2025-06-02 19:55:49.227201 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:55:49.232229 | orchestrator |
2025-06-02 19:55:49.232282 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-02 19:55:49.233833 | orchestrator | Monday 02 June 2025 19:55:49 +0000 (0:00:00.134) 0:00:25.205 ***********
2025-06-02 19:55:49.391827 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b573976-5050-5314-b52d-708d81144fb3'}})
2025-06-02 19:55:49.393525 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dc535ca-7422-5c6b-b80a-593b3887af48'}})
2025-06-02 19:55:49.395883 | orchestrator |
2025-06-02 19:55:49.395932 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-02 19:55:49.396948 | orchestrator | Monday 02 June 2025 19:55:49 +0000 (0:00:00.165) 0:00:25.370 ***********
2025-06-02 19:55:49.560004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b573976-5050-5314-b52d-708d81144fb3'}})
2025-06-02 19:55:49.561019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dc535ca-7422-5c6b-b80a-593b3887af48'}})
2025-06-02 19:55:49.562648 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:49.565705 | orchestrator |
2025-06-02 19:55:49.566342 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-02 19:55:49.566944 | orchestrator | Monday 02 June 2025 19:55:49 +0000 (0:00:00.168) 0:00:25.539 ***********
2025-06-02 19:55:49.698953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b573976-5050-5314-b52d-708d81144fb3'}})
2025-06-02 19:55:49.699663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dc535ca-7422-5c6b-b80a-593b3887af48'}})
2025-06-02 19:55:49.700540 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:49.700600 | orchestrator |
2025-06-02 19:55:49.700702 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-02 19:55:49.703883 | orchestrator | Monday 02 June 2025 19:55:49 +0000 (0:00:00.139) 0:00:25.678 ***********
2025-06-02 19:55:49.834607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b573976-5050-5314-b52d-708d81144fb3'}})
2025-06-02 19:55:49.836082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dc535ca-7422-5c6b-b80a-593b3887af48'}})
2025-06-02 19:55:49.837508 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:49.838793 | orchestrator |
2025-06-02 19:55:49.839618 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-02 19:55:49.840616 | orchestrator | Monday 02 June 2025 19:55:49 +0000 (0:00:00.135) 0:00:25.814 ***********
2025-06-02 19:55:49.940633 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:55:49.941300 | orchestrator |
2025-06-02 19:55:49.943095 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-02 19:55:49.943412 | orchestrator | Monday 02 June 2025 19:55:49 +0000 (0:00:00.105) 0:00:25.919 ***********
2025-06-02 19:55:50.063090 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:55:50.063983 | orchestrator |
2025-06-02 19:55:50.065015 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-02 19:55:50.066903 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:00.121) 0:00:26.041 ***********
2025-06-02 19:55:50.187405 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:50.189065 | orchestrator |
2025-06-02 19:55:50.189178 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-02 19:55:50.190560 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:00.124) 0:00:26.165 ***********
2025-06-02 19:55:50.446765 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:50.446975 | orchestrator |
2025-06-02 19:55:50.448141 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-02 19:55:50.449819 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:00.259) 0:00:26.425 ***********
2025-06-02 19:55:50.567925 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:50.568017 | orchestrator |
2025-06-02 19:55:50.568261 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 19:55:50.568791 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:00.120) 0:00:26.546 ***********
2025-06-02 19:55:50.694495 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:55:50.696794 | orchestrator |  "ceph_osd_devices": {
2025-06-02 19:55:50.697450 | orchestrator |  "sdb": {
2025-06-02 19:55:50.697476 | orchestrator |  "osd_lvm_uuid": "0b573976-5050-5314-b52d-708d81144fb3"
2025-06-02 19:55:50.697810 | orchestrator |  },
2025-06-02 19:55:50.699572 | orchestrator |  "sdc": {
2025-06-02 19:55:50.699843 | orchestrator |  "osd_lvm_uuid": "1dc535ca-7422-5c6b-b80a-593b3887af48"
2025-06-02 19:55:50.700536 | orchestrator |  }
2025-06-02 19:55:50.701169 | orchestrator |  }
2025-06-02 19:55:50.701783 | orchestrator | }
2025-06-02 19:55:50.702317 | orchestrator |
2025-06-02 19:55:50.702992 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 19:55:50.703363 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:00.126) 0:00:26.672 ***********
2025-06-02 19:55:50.817949 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:50.819215 | orchestrator |
2025-06-02 19:55:50.820596 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 19:55:50.821221 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:00.122) 0:00:26.795 ***********
2025-06-02 19:55:50.936573 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:50.936741 | orchestrator |
2025-06-02 19:55:50.937353 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 19:55:50.938319 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:00.120) 0:00:26.915 ***********
2025-06-02 19:55:51.054523 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:51.055601 | orchestrator |
2025-06-02 19:55:51.056195 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 19:55:51.059991 | orchestrator | Monday 02 June 2025 19:55:51 +0000 (0:00:00.116) 0:00:27.032 ***********
2025-06-02 19:55:51.244320 | orchestrator | changed: [testbed-node-4] => {
2025-06-02 19:55:51.245565 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 19:55:51.247373 | orchestrator |  "ceph_osd_devices": {
2025-06-02 19:55:51.248696 | orchestrator |  "sdb": {
2025-06-02 19:55:51.249841 | orchestrator |  "osd_lvm_uuid": "0b573976-5050-5314-b52d-708d81144fb3"
2025-06-02 19:55:51.251754 | orchestrator |  },
2025-06-02 19:55:51.252904 | orchestrator |  "sdc": {
2025-06-02 19:55:51.254116 | orchestrator |  "osd_lvm_uuid": "1dc535ca-7422-5c6b-b80a-593b3887af48"
2025-06-02 19:55:51.254484 | orchestrator |  }
2025-06-02 19:55:51.256609 | orchestrator |  },
2025-06-02 19:55:51.257647 | orchestrator |  "lvm_volumes": [
2025-06-02 19:55:51.258407 | orchestrator |  {
2025-06-02 19:55:51.259504 | orchestrator |  "data": "osd-block-0b573976-5050-5314-b52d-708d81144fb3",
2025-06-02 19:55:51.260161 | orchestrator |  "data_vg": "ceph-0b573976-5050-5314-b52d-708d81144fb3"
2025-06-02 19:55:51.261006 | orchestrator |  },
2025-06-02 19:55:51.261558 | orchestrator |  {
2025-06-02 19:55:51.262108 | orchestrator |  "data": "osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48",
2025-06-02 19:55:51.262689 | orchestrator |  "data_vg": "ceph-1dc535ca-7422-5c6b-b80a-593b3887af48"
2025-06-02 19:55:51.263362 | orchestrator |  }
2025-06-02 19:55:51.263842 | orchestrator |  ]
2025-06-02 19:55:51.264586 | orchestrator |  }
2025-06-02 19:55:51.265093 | orchestrator | }
2025-06-02 19:55:51.265901 | orchestrator |
2025-06-02 19:55:51.266482 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 19:55:51.266945 | orchestrator | Monday 02 June 2025 19:55:51 +0000 (0:00:00.189) 0:00:27.222 ***********
2025-06-02 19:55:52.236501 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 19:55:52.236611 | orchestrator |
2025-06-02 19:55:52.237403 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-02 19:55:52.238540 | orchestrator |
2025-06-02 19:55:52.239180 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 19:55:52.240161 | orchestrator | Monday 02 June 2025 19:55:52 +0000 (0:00:00.991) 0:00:28.213 ***********
2025-06-02 19:55:52.612484 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 19:55:52.612719 | orchestrator |
2025-06-02 19:55:52.612998 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 19:55:52.613310 | orchestrator | Monday 02 June 2025 19:55:52 +0000 (0:00:00.378) 0:00:28.591 ***********
2025-06-02 19:55:53.106230 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:55:53.107238 | orchestrator |
2025-06-02 19:55:53.107684 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:53.108371 | orchestrator | Monday 02 June 2025 19:55:53 +0000 (0:00:00.492) 0:00:29.084 ***********
2025-06-02 19:55:53.438088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-02 19:55:53.438352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-02 19:55:53.438973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-02 19:55:53.439736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-02 19:55:53.442417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-02 19:55:53.442543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-02 19:55:53.442558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-02 19:55:53.442570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-02 19:55:53.442866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-02 19:55:53.443605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-02 19:55:53.444262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-02 19:55:53.444696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-02 19:55:53.445485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-02 19:55:53.445975 | orchestrator |
2025-06-02 19:55:53.446291 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:53.448044 | orchestrator | Monday 02 June 2025 19:55:53 +0000 (0:00:00.333) 0:00:29.417 ***********
2025-06-02 19:55:53.628067 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:53.628171 | orchestrator |
2025-06-02 19:55:53.628186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:53.628749 | orchestrator | Monday 02 June 2025 19:55:53 +0000 (0:00:00.189) 0:00:29.607 ***********
2025-06-02 19:55:53.812972 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:53.813698 | orchestrator |
2025-06-02 19:55:53.814618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:53.814920 | orchestrator | Monday 02 June 2025 19:55:53 +0000 (0:00:00.182) 0:00:29.790 ***********
2025-06-02 19:55:54.014163 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:54.014262 | orchestrator |
2025-06-02 19:55:54.015042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:54.016897 | orchestrator | Monday 02 June 2025 19:55:54 +0000 (0:00:00.202) 0:00:29.993 ***********
2025-06-02 19:55:54.199290 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:54.199580 | orchestrator |
2025-06-02 19:55:54.201259 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:54.202247 | orchestrator | Monday 02 June 2025 19:55:54 +0000 (0:00:00.183) 0:00:30.176 ***********
2025-06-02 19:55:54.387783 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:54.388510 | orchestrator |
2025-06-02 19:55:54.389833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:54.391533 | orchestrator | Monday 02 June 2025 19:55:54 +0000 (0:00:00.189) 0:00:30.366 ***********
2025-06-02 19:55:54.575070 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:54.576085 | orchestrator |
2025-06-02 19:55:54.577712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:54.578242 | orchestrator | Monday 02 June 2025 19:55:54 +0000 (0:00:00.188) 0:00:30.554 ***********
2025-06-02 19:55:54.745738 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:54.746357 | orchestrator |
2025-06-02 19:55:54.747510 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:54.748250 | orchestrator | Monday 02 June 2025 19:55:54 +0000 (0:00:00.169) 0:00:30.723 ***********
2025-06-02 19:55:54.926772 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:54.926983 | orchestrator |
2025-06-02 19:55:54.927684 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:54.928234 | orchestrator | Monday 02 June 2025 19:55:54 +0000 (0:00:00.180) 0:00:30.904 ***********
2025-06-02 19:55:55.459583 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3)
2025-06-02 19:55:55.459763 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3)
2025-06-02 19:55:55.461213 | orchestrator |
2025-06-02 19:55:55.461795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:55.462568 | orchestrator | Monday 02 June 2025 19:55:55 +0000 (0:00:00.533) 0:00:31.438 ***********
2025-06-02 19:55:56.163154 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76)
2025-06-02 19:55:56.164773 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76)
2025-06-02 19:55:56.165508 | orchestrator |
2025-06-02 19:55:56.166151 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:56.166868 | orchestrator | Monday 02 June 2025 19:55:56 +0000 (0:00:00.702) 0:00:32.141 ***********
2025-06-02 19:55:56.561327 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6)
2025-06-02 19:55:56.561748 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6)
2025-06-02 19:55:56.562889 | orchestrator |
2025-06-02 19:55:56.563830 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:56.564511 | orchestrator | Monday 02 June 2025 19:55:56 +0000 (0:00:00.399) 0:00:32.540 ***********
2025-06-02 19:55:56.949518 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f)
2025-06-02 19:55:56.949760 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f)
2025-06-02 19:55:56.950728 | orchestrator |
2025-06-02 19:55:56.951584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:56.952304 | orchestrator | Monday 02 June 2025 19:55:56 +0000 (0:00:00.382) 0:00:32.923 ***********
2025-06-02 19:55:57.281557 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 19:55:57.281671 | orchestrator |
2025-06-02 19:55:57.281694 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:57.281713 | orchestrator | Monday 02 June 2025 19:55:57 +0000 (0:00:00.336) 0:00:33.259 ***********
2025-06-02 19:55:57.637342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-02 19:55:57.638519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-02 19:55:57.639827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-02 19:55:57.641804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-02 19:55:57.643248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-02 19:55:57.645394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-02 19:55:57.646226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-02 19:55:57.647448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-02 19:55:57.648764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-02 19:55:57.649415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-02 19:55:57.650080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-02 19:55:57.651104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-02 19:55:57.651563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-02 19:55:57.652279 | orchestrator |
2025-06-02 19:55:57.652953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:57.653544 | orchestrator | Monday 02 June 2025 19:55:57 +0000 (0:00:00.356) 0:00:33.616 ***********
2025-06-02 19:55:57.834334 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:57.835294 | orchestrator |
2025-06-02 19:55:57.836598 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:57.837306 | orchestrator | Monday 02 June 2025 19:55:57 +0000 (0:00:00.197) 0:00:33.813 ***********
2025-06-02 19:55:58.010806 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:58.011490 | orchestrator |
2025-06-02 19:55:58.012668 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:58.013019 | orchestrator | Monday 02 June 2025 19:55:58 +0000 (0:00:00.174) 0:00:33.987 ***********
2025-06-02 19:55:58.186843 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:58.187575 | orchestrator |
2025-06-02 19:55:58.188036 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:58.188607 | orchestrator | Monday 02 June 2025 19:55:58 +0000 (0:00:00.178) 0:00:34.166 ***********
2025-06-02 19:55:58.362633 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:58.363365 | orchestrator |
2025-06-02 19:55:58.363964 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:58.364710 | orchestrator | Monday 02 June 2025 19:55:58 +0000 (0:00:00.175) 0:00:34.341 ***********
2025-06-02 19:55:58.544525
| orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:58.544882 | orchestrator | 2025-06-02 19:55:58.546137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:58.548502 | orchestrator | Monday 02 June 2025 19:55:58 +0000 (0:00:00.180) 0:00:34.522 *********** 2025-06-02 19:55:59.035141 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:59.035306 | orchestrator | 2025-06-02 19:55:59.035323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:59.035407 | orchestrator | Monday 02 June 2025 19:55:59 +0000 (0:00:00.490) 0:00:35.012 *********** 2025-06-02 19:55:59.266259 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:59.266661 | orchestrator | 2025-06-02 19:55:59.269172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:59.270078 | orchestrator | Monday 02 June 2025 19:55:59 +0000 (0:00:00.231) 0:00:35.244 *********** 2025-06-02 19:55:59.452112 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:59.452496 | orchestrator | 2025-06-02 19:55:59.452615 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:55:59.452747 | orchestrator | Monday 02 June 2025 19:55:59 +0000 (0:00:00.185) 0:00:35.429 *********** 2025-06-02 19:56:00.053372 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-02 19:56:00.054958 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-02 19:56:00.056116 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-02 19:56:00.057561 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-02 19:56:00.058144 | orchestrator | 2025-06-02 19:56:00.058762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:00.059415 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:00.601) 0:00:36.030 
*********** 2025-06-02 19:56:00.239542 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:00.239765 | orchestrator | 2025-06-02 19:56:00.240215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:00.240962 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:00.187) 0:00:36.218 *********** 2025-06-02 19:56:00.435990 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:00.436237 | orchestrator | 2025-06-02 19:56:00.436993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:00.437887 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:00.196) 0:00:36.414 *********** 2025-06-02 19:56:00.612304 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:00.612657 | orchestrator | 2025-06-02 19:56:00.613404 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:00.614568 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:00.177) 0:00:36.591 *********** 2025-06-02 19:56:00.797335 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:00.797939 | orchestrator | 2025-06-02 19:56:00.798852 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 19:56:00.799304 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:00.184) 0:00:36.776 *********** 2025-06-02 19:56:00.955231 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-02 19:56:00.955604 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-02 19:56:00.955985 | orchestrator | 2025-06-02 19:56:00.956586 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-02 19:56:00.957200 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:00.156) 0:00:36.933 *********** 2025-06-02 19:56:01.083654 | orchestrator | skipping: 
[testbed-node-5] 2025-06-02 19:56:01.083775 | orchestrator | 2025-06-02 19:56:01.084077 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 19:56:01.084901 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:00.128) 0:00:37.061 *********** 2025-06-02 19:56:01.214005 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:01.214352 | orchestrator | 2025-06-02 19:56:01.214667 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 19:56:01.215394 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:00.130) 0:00:37.191 *********** 2025-06-02 19:56:01.330374 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:01.330565 | orchestrator | 2025-06-02 19:56:01.331800 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 19:56:01.332405 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:00.116) 0:00:37.308 *********** 2025-06-02 19:56:01.608704 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:56:01.609171 | orchestrator | 2025-06-02 19:56:01.612133 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 19:56:01.612193 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:00.278) 0:00:37.586 *********** 2025-06-02 19:56:01.776141 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1b51fe1f-19f9-5db6-a741-38088f1d71cf'}}) 2025-06-02 19:56:01.776346 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2dc54921-ef42-515a-84de-1f3d0e017dc1'}}) 2025-06-02 19:56:01.776811 | orchestrator | 2025-06-02 19:56:01.777242 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 19:56:01.781064 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:00.168) 0:00:37.755 *********** 2025-06-02 19:56:01.923784 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1b51fe1f-19f9-5db6-a741-38088f1d71cf'}})  2025-06-02 19:56:01.926875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2dc54921-ef42-515a-84de-1f3d0e017dc1'}})  2025-06-02 19:56:01.926923 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:01.926937 | orchestrator | 2025-06-02 19:56:01.927263 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 19:56:01.927584 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:00.146) 0:00:37.901 *********** 2025-06-02 19:56:02.067408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1b51fe1f-19f9-5db6-a741-38088f1d71cf'}})  2025-06-02 19:56:02.068325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2dc54921-ef42-515a-84de-1f3d0e017dc1'}})  2025-06-02 19:56:02.070064 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:02.070388 | orchestrator | 2025-06-02 19:56:02.071171 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 19:56:02.072237 | orchestrator | Monday 02 June 2025 19:56:02 +0000 (0:00:00.144) 0:00:38.046 *********** 2025-06-02 19:56:02.206079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1b51fe1f-19f9-5db6-a741-38088f1d71cf'}})  2025-06-02 19:56:02.206186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2dc54921-ef42-515a-84de-1f3d0e017dc1'}})  2025-06-02 19:56:02.207052 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:02.207078 | orchestrator | 2025-06-02 19:56:02.207738 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 19:56:02.208189 | orchestrator | Monday 02 June 2025 19:56:02 +0000 
(0:00:00.135) 0:00:38.181 *********** 2025-06-02 19:56:02.318924 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:56:02.319846 | orchestrator | 2025-06-02 19:56:02.320787 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 19:56:02.321547 | orchestrator | Monday 02 June 2025 19:56:02 +0000 (0:00:00.116) 0:00:38.298 *********** 2025-06-02 19:56:02.436127 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:56:02.436566 | orchestrator | 2025-06-02 19:56:02.437892 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 19:56:02.438700 | orchestrator | Monday 02 June 2025 19:56:02 +0000 (0:00:00.116) 0:00:38.414 *********** 2025-06-02 19:56:02.577487 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:02.577963 | orchestrator | 2025-06-02 19:56:02.579511 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 19:56:02.580042 | orchestrator | Monday 02 June 2025 19:56:02 +0000 (0:00:00.141) 0:00:38.556 *********** 2025-06-02 19:56:02.689511 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:02.690481 | orchestrator | 2025-06-02 19:56:02.690600 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 19:56:02.691138 | orchestrator | Monday 02 June 2025 19:56:02 +0000 (0:00:00.112) 0:00:38.668 *********** 2025-06-02 19:56:02.815974 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:02.816095 | orchestrator | 2025-06-02 19:56:02.816686 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 19:56:02.816847 | orchestrator | Monday 02 June 2025 19:56:02 +0000 (0:00:00.124) 0:00:38.793 *********** 2025-06-02 19:56:02.945310 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:56:02.945696 | orchestrator |  "ceph_osd_devices": { 2025-06-02 19:56:02.946474 | orchestrator |  "sdb": 
{ 2025-06-02 19:56:02.947504 | orchestrator |  "osd_lvm_uuid": "1b51fe1f-19f9-5db6-a741-38088f1d71cf" 2025-06-02 19:56:02.947721 | orchestrator |  }, 2025-06-02 19:56:02.950956 | orchestrator |  "sdc": { 2025-06-02 19:56:02.951005 | orchestrator |  "osd_lvm_uuid": "2dc54921-ef42-515a-84de-1f3d0e017dc1" 2025-06-02 19:56:02.951334 | orchestrator |  } 2025-06-02 19:56:02.951356 | orchestrator |  } 2025-06-02 19:56:02.951933 | orchestrator | } 2025-06-02 19:56:02.952572 | orchestrator | 2025-06-02 19:56:02.952595 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-02 19:56:02.954368 | orchestrator | Monday 02 June 2025 19:56:02 +0000 (0:00:00.131) 0:00:38.924 *********** 2025-06-02 19:56:03.052199 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:03.052300 | orchestrator | 2025-06-02 19:56:03.052623 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-02 19:56:03.053753 | orchestrator | Monday 02 June 2025 19:56:03 +0000 (0:00:00.105) 0:00:39.030 *********** 2025-06-02 19:56:03.295596 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:03.297213 | orchestrator | 2025-06-02 19:56:03.298583 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-02 19:56:03.299995 | orchestrator | Monday 02 June 2025 19:56:03 +0000 (0:00:00.242) 0:00:39.273 *********** 2025-06-02 19:56:03.432140 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:56:03.434862 | orchestrator | 2025-06-02 19:56:03.434933 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-02 19:56:03.435154 | orchestrator | Monday 02 June 2025 19:56:03 +0000 (0:00:00.136) 0:00:39.410 *********** 2025-06-02 19:56:03.613762 | orchestrator | changed: [testbed-node-5] => { 2025-06-02 19:56:03.614126 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-02 19:56:03.614528 | orchestrator 
|  "ceph_osd_devices": { 2025-06-02 19:56:03.617464 | orchestrator |  "sdb": { 2025-06-02 19:56:03.617518 | orchestrator |  "osd_lvm_uuid": "1b51fe1f-19f9-5db6-a741-38088f1d71cf" 2025-06-02 19:56:03.617533 | orchestrator |  }, 2025-06-02 19:56:03.617546 | orchestrator |  "sdc": { 2025-06-02 19:56:03.617799 | orchestrator |  "osd_lvm_uuid": "2dc54921-ef42-515a-84de-1f3d0e017dc1" 2025-06-02 19:56:03.618329 | orchestrator |  } 2025-06-02 19:56:03.618701 | orchestrator |  }, 2025-06-02 19:56:03.619254 | orchestrator |  "lvm_volumes": [ 2025-06-02 19:56:03.619631 | orchestrator |  { 2025-06-02 19:56:03.620109 | orchestrator |  "data": "osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf", 2025-06-02 19:56:03.621069 | orchestrator |  "data_vg": "ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf" 2025-06-02 19:56:03.621995 | orchestrator |  }, 2025-06-02 19:56:03.622220 | orchestrator |  { 2025-06-02 19:56:03.622779 | orchestrator |  "data": "osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1", 2025-06-02 19:56:03.623053 | orchestrator |  "data_vg": "ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1" 2025-06-02 19:56:03.623763 | orchestrator |  } 2025-06-02 19:56:03.624560 | orchestrator |  ] 2025-06-02 19:56:03.625313 | orchestrator |  } 2025-06-02 19:56:03.626102 | orchestrator | } 2025-06-02 19:56:03.626844 | orchestrator | 2025-06-02 19:56:03.627314 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 19:56:03.628003 | orchestrator | Monday 02 June 2025 19:56:03 +0000 (0:00:00.182) 0:00:39.592 *********** 2025-06-02 19:56:04.484146 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 19:56:04.488208 | orchestrator | 2025-06-02 19:56:04.489762 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:56:04.489869 | orchestrator | 2025-06-02 19:56:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 19:56:04.489888 | orchestrator | 2025-06-02 19:56:04 | INFO  | Please wait and do not abort execution. 2025-06-02 19:56:04.490396 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 19:56:04.491778 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 19:56:04.493068 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 19:56:04.493560 | orchestrator | 2025-06-02 19:56:04.494093 | orchestrator | 2025-06-02 19:56:04.494695 | orchestrator | 2025-06-02 19:56:04.495325 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:56:04.496610 | orchestrator | Monday 02 June 2025 19:56:04 +0000 (0:00:00.867) 0:00:40.460 *********** 2025-06-02 19:56:04.496648 | orchestrator | =============================================================================== 2025-06-02 19:56:04.497128 | orchestrator | Write configuration file ------------------------------------------------ 3.97s 2025-06-02 19:56:04.497621 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2025-06-02 19:56:04.498059 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s 2025-06-02 19:56:04.498610 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2025-06-02 19:56:04.498903 | orchestrator | Get initial list of available block devices ----------------------------- 0.97s 2025-06-02 19:56:04.499903 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.91s 2025-06-02 19:56:04.500013 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2025-06-02 19:56:04.500445 | orchestrator | Add known links to the list of available block devices ------------------ 
0.73s 2025-06-02 19:56:04.500894 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.71s 2025-06-02 19:56:04.501305 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2025-06-02 19:56:04.501557 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2025-06-02 19:56:04.502056 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.70s 2025-06-02 19:56:04.502520 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-06-02 19:56:04.502910 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-06-02 19:56:04.503343 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-06-02 19:56:04.503766 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-06-02 19:56:04.504090 | orchestrator | Print configuration data ------------------------------------------------ 0.59s 2025-06-02 19:56:04.505245 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.56s 2025-06-02 19:56:04.505545 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2025-06-02 19:56:04.507062 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2025-06-02 19:56:16.523854 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:56:16.523961 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:56:16.523978 | orchestrator | Registering Redlock._release_script 2025-06-02 19:56:16.573142 | orchestrator | 2025-06-02 19:56:16 | INFO  | Task cfab4488-77d3-42a1-9175-fca63b10eb38 (sync inventory) is running in background. Output coming soon. 
2025-06-02 19:56:34.628045 | orchestrator | 2025-06-02 19:56:17 | INFO  | Starting group_vars file reorganization 2025-06-02 19:56:34.628155 | orchestrator | 2025-06-02 19:56:17 | INFO  | Moved 0 file(s) to their respective directories 2025-06-02 19:56:34.628171 | orchestrator | 2025-06-02 19:56:17 | INFO  | Group_vars file reorganization completed 2025-06-02 19:56:34.628183 | orchestrator | 2025-06-02 19:56:19 | INFO  | Starting variable preparation from inventory 2025-06-02 19:56:34.628195 | orchestrator | 2025-06-02 19:56:20 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-06-02 19:56:34.628207 | orchestrator | 2025-06-02 19:56:20 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-06-02 19:56:34.628242 | orchestrator | 2025-06-02 19:56:20 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-06-02 19:56:34.628255 | orchestrator | 2025-06-02 19:56:20 | INFO  | 3 file(s) written, 6 host(s) processed 2025-06-02 19:56:34.628267 | orchestrator | 2025-06-02 19:56:20 | INFO  | Variable preparation completed: 2025-06-02 19:56:34.628279 | orchestrator | 2025-06-02 19:56:21 | INFO  | Starting inventory overwrite handling 2025-06-02 19:56:34.628290 | orchestrator | 2025-06-02 19:56:21 | INFO  | Handling group overwrites in 99-overwrite 2025-06-02 19:56:34.628302 | orchestrator | 2025-06-02 19:56:21 | INFO  | Removing group frr:children from 60-generic 2025-06-02 19:56:34.628313 | orchestrator | 2025-06-02 19:56:21 | INFO  | Removing group storage:children from 50-kolla 2025-06-02 19:56:34.628325 | orchestrator | 2025-06-02 19:56:21 | INFO  | Removing group netbird:children from 50-infrastruture 2025-06-02 19:56:34.628345 | orchestrator | 2025-06-02 19:56:21 | INFO  | Removing group ceph-rgw from 50-ceph 2025-06-02 19:56:34.628357 | orchestrator | 2025-06-02 19:56:21 | INFO  | Removing group ceph-mds from 50-ceph 2025-06-02 19:56:34.628369 | orchestrator | 2025-06-02 19:56:21 | INFO  | Handling group 
overwrites in 20-roles 2025-06-02 19:56:34.628380 | orchestrator | 2025-06-02 19:56:21 | INFO  | Removing group k3s_node from 50-infrastruture 2025-06-02 19:56:34.628392 | orchestrator | 2025-06-02 19:56:21 | INFO  | Removed 6 group(s) in total 2025-06-02 19:56:34.628447 | orchestrator | 2025-06-02 19:56:21 | INFO  | Inventory overwrite handling completed 2025-06-02 19:56:34.628466 | orchestrator | 2025-06-02 19:56:22 | INFO  | Starting merge of inventory files 2025-06-02 19:56:34.628485 | orchestrator | 2025-06-02 19:56:22 | INFO  | Inventory files merged successfully 2025-06-02 19:56:34.628504 | orchestrator | 2025-06-02 19:56:26 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-06-02 19:56:34.628522 | orchestrator | 2025-06-02 19:56:33 | INFO  | Successfully wrote ClusterShell configuration 2025-06-02 19:56:34.628537 | orchestrator | [master 4f34528] 2025-06-02-19-56 2025-06-02 19:56:34.628549 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-06-02 19:56:36.291082 | orchestrator | 2025-06-02 19:56:36 | INFO  | Task b66dcace-f95b-4813-88b5-f1a1030be8b6 (ceph-create-lvm-devices) was prepared for execution. 2025-06-02 19:56:36.291185 | orchestrator | 2025-06-02 19:56:36 | INFO  | It takes a moment until task b66dcace-f95b-4813-88b5-f1a1030be8b6 (ceph-create-lvm-devices) has been started and output is visible here. 
2025-06-02 19:56:40.103622 | orchestrator | 2025-06-02 19:56:40.104488 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-02 19:56:40.105237 | orchestrator | 2025-06-02 19:56:40.107216 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 19:56:40.107780 | orchestrator | Monday 02 June 2025 19:56:40 +0000 (0:00:00.282) 0:00:00.282 *********** 2025-06-02 19:56:40.321609 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 19:56:40.321780 | orchestrator | 2025-06-02 19:56:40.322690 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 19:56:40.323464 | orchestrator | Monday 02 June 2025 19:56:40 +0000 (0:00:00.220) 0:00:00.502 *********** 2025-06-02 19:56:40.522123 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:40.523195 | orchestrator | 2025-06-02 19:56:40.524222 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:40.524562 | orchestrator | Monday 02 June 2025 19:56:40 +0000 (0:00:00.201) 0:00:00.703 *********** 2025-06-02 19:56:40.880677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-02 19:56:40.881147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-02 19:56:40.882233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-02 19:56:40.882872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-02 19:56:40.883314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-02 19:56:40.884793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-02 19:56:40.885873 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-02 19:56:40.886967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-02 19:56:40.887125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-02 19:56:40.887652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-02 19:56:40.888142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-02 19:56:40.888856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-02 19:56:40.888991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-02 19:56:40.889518 | orchestrator | 2025-06-02 19:56:40.890356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:40.890465 | orchestrator | Monday 02 June 2025 19:56:40 +0000 (0:00:00.358) 0:00:01.061 *********** 2025-06-02 19:56:41.241910 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:41.242108 | orchestrator | 2025-06-02 19:56:41.243290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:41.244371 | orchestrator | Monday 02 June 2025 19:56:41 +0000 (0:00:00.359) 0:00:01.421 *********** 2025-06-02 19:56:41.413030 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:41.413295 | orchestrator | 2025-06-02 19:56:41.414203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:41.415212 | orchestrator | Monday 02 June 2025 19:56:41 +0000 (0:00:00.172) 0:00:01.593 *********** 2025-06-02 19:56:41.588277 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:41.589158 | orchestrator | 2025-06-02 19:56:41.590369 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-06-02 19:56:41.590919 | orchestrator | Monday 02 June 2025 19:56:41 +0000 (0:00:00.175) 0:00:01.769 *********** 2025-06-02 19:56:41.760847 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:41.762208 | orchestrator | 2025-06-02 19:56:41.762859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:41.763802 | orchestrator | Monday 02 June 2025 19:56:41 +0000 (0:00:00.171) 0:00:01.941 *********** 2025-06-02 19:56:41.942233 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:41.942737 | orchestrator | 2025-06-02 19:56:41.943765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:41.944211 | orchestrator | Monday 02 June 2025 19:56:41 +0000 (0:00:00.181) 0:00:02.122 *********** 2025-06-02 19:56:42.122463 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:42.123117 | orchestrator | 2025-06-02 19:56:42.125165 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:42.125889 | orchestrator | Monday 02 June 2025 19:56:42 +0000 (0:00:00.180) 0:00:02.303 *********** 2025-06-02 19:56:42.308074 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:42.309282 | orchestrator | 2025-06-02 19:56:42.310636 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:42.311563 | orchestrator | Monday 02 June 2025 19:56:42 +0000 (0:00:00.184) 0:00:02.487 *********** 2025-06-02 19:56:42.479605 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:42.479831 | orchestrator | 2025-06-02 19:56:42.481459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:42.481503 | orchestrator | Monday 02 June 2025 19:56:42 +0000 (0:00:00.172) 0:00:02.660 *********** 2025-06-02 19:56:42.878975 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80) 2025-06-02 19:56:42.879074 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80) 2025-06-02 19:56:42.880164 | orchestrator | 2025-06-02 19:56:42.881893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:42.882288 | orchestrator | Monday 02 June 2025 19:56:42 +0000 (0:00:00.398) 0:00:03.059 *********** 2025-06-02 19:56:43.254503 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250) 2025-06-02 19:56:43.254603 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250) 2025-06-02 19:56:43.254673 | orchestrator | 2025-06-02 19:56:43.254750 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:43.255253 | orchestrator | Monday 02 June 2025 19:56:43 +0000 (0:00:00.373) 0:00:03.433 *********** 2025-06-02 19:56:43.768001 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773) 2025-06-02 19:56:43.768963 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773) 2025-06-02 19:56:43.769090 | orchestrator | 2025-06-02 19:56:43.769988 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:43.770177 | orchestrator | Monday 02 June 2025 19:56:43 +0000 (0:00:00.516) 0:00:03.949 *********** 2025-06-02 19:56:44.252050 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335) 2025-06-02 19:56:44.254445 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335) 2025-06-02 19:56:44.255484 | orchestrator | 2025-06-02 19:56:44.257471 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:56:44.258592 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.482) 0:00:04.432 *********** 2025-06-02 19:56:44.812041 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 19:56:44.813020 | orchestrator | 2025-06-02 19:56:44.814150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:44.815201 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.560) 0:00:04.992 *********** 2025-06-02 19:56:45.186522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-02 19:56:45.186765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-02 19:56:45.187259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-02 19:56:45.188110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-02 19:56:45.188794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-02 19:56:45.189286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-02 19:56:45.190092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-02 19:56:45.190659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-02 19:56:45.191172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-02 19:56:45.191814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-02 19:56:45.192217 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-02 19:56:45.192655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 19:56:45.193063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 19:56:45.193565 | orchestrator | 2025-06-02 19:56:45.194066 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:45.194585 | orchestrator | Monday 02 June 2025 19:56:45 +0000 (0:00:00.374) 0:00:05.367 *********** 2025-06-02 19:56:45.377870 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:45.377959 | orchestrator | 2025-06-02 19:56:45.378613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:45.379214 | orchestrator | Monday 02 June 2025 19:56:45 +0000 (0:00:00.188) 0:00:05.556 *********** 2025-06-02 19:56:45.555333 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:45.555480 | orchestrator | 2025-06-02 19:56:45.556409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:45.557207 | orchestrator | Monday 02 June 2025 19:56:45 +0000 (0:00:00.179) 0:00:05.736 *********** 2025-06-02 19:56:45.732018 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:45.732620 | orchestrator | 2025-06-02 19:56:45.734329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:45.734357 | orchestrator | Monday 02 June 2025 19:56:45 +0000 (0:00:00.176) 0:00:05.912 *********** 2025-06-02 19:56:45.909423 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:45.909650 | orchestrator | 2025-06-02 19:56:45.910106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:45.911456 | orchestrator | Monday 02 June 2025 
19:56:45 +0000 (0:00:00.177) 0:00:06.090 *********** 2025-06-02 19:56:46.091749 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:46.092780 | orchestrator | 2025-06-02 19:56:46.093844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:46.094900 | orchestrator | Monday 02 June 2025 19:56:46 +0000 (0:00:00.181) 0:00:06.271 *********** 2025-06-02 19:56:46.272689 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:46.273162 | orchestrator | 2025-06-02 19:56:46.274318 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:46.275359 | orchestrator | Monday 02 June 2025 19:56:46 +0000 (0:00:00.182) 0:00:06.453 *********** 2025-06-02 19:56:46.450484 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:46.450688 | orchestrator | 2025-06-02 19:56:46.451703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:46.452481 | orchestrator | Monday 02 June 2025 19:56:46 +0000 (0:00:00.177) 0:00:06.631 *********** 2025-06-02 19:56:46.624740 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:46.625298 | orchestrator | 2025-06-02 19:56:46.626718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:46.627633 | orchestrator | Monday 02 June 2025 19:56:46 +0000 (0:00:00.174) 0:00:06.805 *********** 2025-06-02 19:56:47.531722 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 19:56:47.532177 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-02 19:56:47.532299 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 19:56:47.533624 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 19:56:47.533653 | orchestrator | 2025-06-02 19:56:47.533910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:47.534633 
| orchestrator | Monday 02 June 2025 19:56:47 +0000 (0:00:00.906) 0:00:07.712 *********** 2025-06-02 19:56:47.716492 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:47.716621 | orchestrator | 2025-06-02 19:56:47.717146 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:47.718529 | orchestrator | Monday 02 June 2025 19:56:47 +0000 (0:00:00.184) 0:00:07.896 *********** 2025-06-02 19:56:47.897436 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:47.897535 | orchestrator | 2025-06-02 19:56:47.897615 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:47.898356 | orchestrator | Monday 02 June 2025 19:56:47 +0000 (0:00:00.180) 0:00:08.076 *********** 2025-06-02 19:56:48.079143 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:48.079343 | orchestrator | 2025-06-02 19:56:48.079979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:56:48.080877 | orchestrator | Monday 02 June 2025 19:56:48 +0000 (0:00:00.182) 0:00:08.259 *********** 2025-06-02 19:56:48.261127 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:48.262044 | orchestrator | 2025-06-02 19:56:48.262839 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 19:56:48.263851 | orchestrator | Monday 02 June 2025 19:56:48 +0000 (0:00:00.182) 0:00:08.441 *********** 2025-06-02 19:56:48.392111 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:48.394595 | orchestrator | 2025-06-02 19:56:48.394808 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 19:56:48.395268 | orchestrator | Monday 02 June 2025 19:56:48 +0000 (0:00:00.129) 0:00:08.571 *********** 2025-06-02 19:56:48.602945 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'5468daec-208d-5ea7-b544-bcde6bebed84'}}) 2025-06-02 19:56:48.603039 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd0ca6db9-1635-53d8-80de-4807c4d987bd'}}) 2025-06-02 19:56:48.603053 | orchestrator | 2025-06-02 19:56:48.603152 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 19:56:48.603165 | orchestrator | Monday 02 June 2025 19:56:48 +0000 (0:00:00.211) 0:00:08.782 *********** 2025-06-02 19:56:50.507914 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'}) 2025-06-02 19:56:50.508422 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'}) 2025-06-02 19:56:50.508718 | orchestrator | 2025-06-02 19:56:50.511261 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 19:56:50.512135 | orchestrator | Monday 02 June 2025 19:56:50 +0000 (0:00:01.904) 0:00:10.687 *********** 2025-06-02 19:56:50.639537 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:50.640170 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:50.641784 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:50.642284 | orchestrator | 2025-06-02 19:56:50.643345 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 19:56:50.643672 | orchestrator | Monday 02 June 2025 19:56:50 +0000 (0:00:00.132) 0:00:10.819 *********** 2025-06-02 19:56:51.989507 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'}) 2025-06-02 19:56:51.989945 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'}) 2025-06-02 19:56:51.991426 | orchestrator | 2025-06-02 19:56:51.992692 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 19:56:51.993704 | orchestrator | Monday 02 June 2025 19:56:51 +0000 (0:00:01.349) 0:00:12.168 *********** 2025-06-02 19:56:52.145491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:52.146310 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:52.147749 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:52.148766 | orchestrator | 2025-06-02 19:56:52.149505 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 19:56:52.150145 | orchestrator | Monday 02 June 2025 19:56:52 +0000 (0:00:00.157) 0:00:12.326 *********** 2025-06-02 19:56:52.279263 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:52.279775 | orchestrator | 2025-06-02 19:56:52.280791 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 19:56:52.281983 | orchestrator | Monday 02 June 2025 19:56:52 +0000 (0:00:00.133) 0:00:12.459 *********** 2025-06-02 19:56:52.581329 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:52.583319 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:52.583811 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:52.584958 | orchestrator | 2025-06-02 19:56:52.585984 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 19:56:52.586358 | orchestrator | Monday 02 June 2025 19:56:52 +0000 (0:00:00.301) 0:00:12.760 *********** 2025-06-02 19:56:52.728231 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:52.728458 | orchestrator | 2025-06-02 19:56:52.729218 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 19:56:52.729927 | orchestrator | Monday 02 June 2025 19:56:52 +0000 (0:00:00.147) 0:00:12.908 *********** 2025-06-02 19:56:52.868978 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:52.870157 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:52.871064 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:52.871866 | orchestrator | 2025-06-02 19:56:52.872984 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 19:56:52.873821 | orchestrator | Monday 02 June 2025 19:56:52 +0000 (0:00:00.140) 0:00:13.048 *********** 2025-06-02 19:56:52.983626 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:52.984086 | orchestrator | 2025-06-02 19:56:52.985046 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 19:56:52.986685 | orchestrator | Monday 02 June 2025 19:56:52 +0000 (0:00:00.115) 0:00:13.164 *********** 2025-06-02 19:56:53.119582 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:53.119689 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:53.119799 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:53.120624 | orchestrator | 2025-06-02 19:56:53.121299 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 19:56:53.121390 | orchestrator | Monday 02 June 2025 19:56:53 +0000 (0:00:00.135) 0:00:13.300 *********** 2025-06-02 19:56:53.243939 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:53.244127 | orchestrator | 2025-06-02 19:56:53.246850 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 19:56:53.246989 | orchestrator | Monday 02 June 2025 19:56:53 +0000 (0:00:00.123) 0:00:13.423 *********** 2025-06-02 19:56:53.414158 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:53.415983 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:53.417942 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:53.418579 | orchestrator | 2025-06-02 19:56:53.419474 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 19:56:53.419897 | orchestrator | Monday 02 June 2025 19:56:53 +0000 (0:00:00.169) 0:00:13.593 *********** 2025-06-02 19:56:53.567125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  
2025-06-02 19:56:53.567617 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:53.568519 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:53.569456 | orchestrator | 2025-06-02 19:56:53.570164 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 19:56:53.571435 | orchestrator | Monday 02 June 2025 19:56:53 +0000 (0:00:00.154) 0:00:13.747 *********** 2025-06-02 19:56:53.730202 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:53.731048 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:53.731644 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:53.732661 | orchestrator | 2025-06-02 19:56:53.732919 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 19:56:53.733883 | orchestrator | Monday 02 June 2025 19:56:53 +0000 (0:00:00.160) 0:00:13.907 *********** 2025-06-02 19:56:53.869401 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:53.869771 | orchestrator | 2025-06-02 19:56:53.871696 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 19:56:53.872192 | orchestrator | Monday 02 June 2025 19:56:53 +0000 (0:00:00.142) 0:00:14.050 *********** 2025-06-02 19:56:54.022785 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:54.023440 | orchestrator | 2025-06-02 19:56:54.025022 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 19:56:54.025904 | orchestrator | Monday 02 June 2025 19:56:54 +0000 (0:00:00.152) 
0:00:14.203 *********** 2025-06-02 19:56:54.170948 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:54.171502 | orchestrator | 2025-06-02 19:56:54.172828 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 19:56:54.173896 | orchestrator | Monday 02 June 2025 19:56:54 +0000 (0:00:00.147) 0:00:14.351 *********** 2025-06-02 19:56:54.636209 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 19:56:54.636935 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 19:56:54.638821 | orchestrator | } 2025-06-02 19:56:54.639698 | orchestrator | 2025-06-02 19:56:54.641174 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 19:56:54.642251 | orchestrator | Monday 02 June 2025 19:56:54 +0000 (0:00:00.464) 0:00:14.816 *********** 2025-06-02 19:56:54.817356 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 19:56:54.818865 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 19:56:54.820217 | orchestrator | } 2025-06-02 19:56:54.821488 | orchestrator | 2025-06-02 19:56:54.822776 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 19:56:54.823869 | orchestrator | Monday 02 June 2025 19:56:54 +0000 (0:00:00.180) 0:00:14.996 *********** 2025-06-02 19:56:54.945089 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 19:56:54.945738 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 19:56:54.946487 | orchestrator | } 2025-06-02 19:56:54.947968 | orchestrator | 2025-06-02 19:56:54.948946 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 19:56:54.950145 | orchestrator | Monday 02 June 2025 19:56:54 +0000 (0:00:00.128) 0:00:15.125 *********** 2025-06-02 19:56:55.602193 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:55.602756 | orchestrator | 2025-06-02 19:56:55.603709 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-06-02 19:56:55.604345 | orchestrator | Monday 02 June 2025 19:56:55 +0000 (0:00:00.657) 0:00:15.782 *********** 2025-06-02 19:56:56.099606 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:56.099859 | orchestrator | 2025-06-02 19:56:56.100891 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 19:56:56.102553 | orchestrator | Monday 02 June 2025 19:56:56 +0000 (0:00:00.496) 0:00:16.278 *********** 2025-06-02 19:56:56.574471 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:56.574691 | orchestrator | 2025-06-02 19:56:56.575685 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 19:56:56.576232 | orchestrator | Monday 02 June 2025 19:56:56 +0000 (0:00:00.475) 0:00:16.754 *********** 2025-06-02 19:56:56.723997 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:56.724226 | orchestrator | 2025-06-02 19:56:56.724924 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 19:56:56.726837 | orchestrator | Monday 02 June 2025 19:56:56 +0000 (0:00:00.148) 0:00:16.903 *********** 2025-06-02 19:56:56.836222 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:56.837157 | orchestrator | 2025-06-02 19:56:56.838111 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 19:56:56.839204 | orchestrator | Monday 02 June 2025 19:56:56 +0000 (0:00:00.112) 0:00:17.015 *********** 2025-06-02 19:56:56.951858 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:56.952762 | orchestrator | 2025-06-02 19:56:56.953457 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 19:56:56.954779 | orchestrator | Monday 02 June 2025 19:56:56 +0000 (0:00:00.115) 0:00:17.131 *********** 2025-06-02 19:56:57.093705 | orchestrator | ok: 
[testbed-node-3] => { 2025-06-02 19:56:57.094966 | orchestrator |  "vgs_report": { 2025-06-02 19:56:57.095790 | orchestrator |  "vg": [] 2025-06-02 19:56:57.097025 | orchestrator |  } 2025-06-02 19:56:57.098406 | orchestrator | } 2025-06-02 19:56:57.099260 | orchestrator | 2025-06-02 19:56:57.100497 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 19:56:57.101591 | orchestrator | Monday 02 June 2025 19:56:57 +0000 (0:00:00.142) 0:00:17.273 *********** 2025-06-02 19:56:57.223919 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:57.224642 | orchestrator | 2025-06-02 19:56:57.225558 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 19:56:57.226337 | orchestrator | Monday 02 June 2025 19:56:57 +0000 (0:00:00.130) 0:00:17.403 *********** 2025-06-02 19:56:57.348600 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:57.349074 | orchestrator | 2025-06-02 19:56:57.350164 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 19:56:57.350520 | orchestrator | Monday 02 June 2025 19:56:57 +0000 (0:00:00.125) 0:00:17.529 *********** 2025-06-02 19:56:57.688682 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:57.689970 | orchestrator | 2025-06-02 19:56:57.691312 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 19:56:57.692698 | orchestrator | Monday 02 June 2025 19:56:57 +0000 (0:00:00.339) 0:00:17.868 *********** 2025-06-02 19:56:57.830190 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:57.830266 | orchestrator | 2025-06-02 19:56:57.830296 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 19:56:57.830393 | orchestrator | Monday 02 June 2025 19:56:57 +0000 (0:00:00.140) 0:00:18.009 *********** 2025-06-02 19:56:57.982601 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 19:56:57.983477 | orchestrator | 2025-06-02 19:56:57.984039 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 19:56:57.984519 | orchestrator | Monday 02 June 2025 19:56:57 +0000 (0:00:00.154) 0:00:18.163 *********** 2025-06-02 19:56:58.118261 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:58.118951 | orchestrator | 2025-06-02 19:56:58.119122 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 19:56:58.119943 | orchestrator | Monday 02 June 2025 19:56:58 +0000 (0:00:00.134) 0:00:18.298 *********** 2025-06-02 19:56:58.249767 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:58.250761 | orchestrator | 2025-06-02 19:56:58.251663 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 19:56:58.252880 | orchestrator | Monday 02 June 2025 19:56:58 +0000 (0:00:00.131) 0:00:18.430 *********** 2025-06-02 19:56:58.387531 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:58.389018 | orchestrator | 2025-06-02 19:56:58.390624 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 19:56:58.391238 | orchestrator | Monday 02 June 2025 19:56:58 +0000 (0:00:00.137) 0:00:18.567 *********** 2025-06-02 19:56:58.530217 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:58.532061 | orchestrator | 2025-06-02 19:56:58.534232 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 19:56:58.534688 | orchestrator | Monday 02 June 2025 19:56:58 +0000 (0:00:00.141) 0:00:18.709 *********** 2025-06-02 19:56:58.660648 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:58.664549 | orchestrator | 2025-06-02 19:56:58.665805 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 19:56:58.666519 | 
orchestrator | Monday 02 June 2025 19:56:58 +0000 (0:00:00.129) 0:00:18.838 *********** 2025-06-02 19:56:58.786168 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:58.786796 | orchestrator | 2025-06-02 19:56:58.788059 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 19:56:58.788696 | orchestrator | Monday 02 June 2025 19:56:58 +0000 (0:00:00.125) 0:00:18.964 *********** 2025-06-02 19:56:58.913656 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:58.913915 | orchestrator | 2025-06-02 19:56:58.914641 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 19:56:58.915104 | orchestrator | Monday 02 June 2025 19:56:58 +0000 (0:00:00.129) 0:00:19.093 *********** 2025-06-02 19:56:59.048977 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:59.049119 | orchestrator | 2025-06-02 19:56:59.049251 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 19:56:59.050745 | orchestrator | Monday 02 June 2025 19:56:59 +0000 (0:00:00.136) 0:00:19.229 *********** 2025-06-02 19:56:59.198097 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:59.198297 | orchestrator | 2025-06-02 19:56:59.199336 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 19:56:59.200563 | orchestrator | Monday 02 June 2025 19:56:59 +0000 (0:00:00.145) 0:00:19.374 *********** 2025-06-02 19:56:59.347854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:59.348220 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:59.350617 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
19:56:59.351691 | orchestrator | 2025-06-02 19:56:59.352349 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 19:56:59.353053 | orchestrator | Monday 02 June 2025 19:56:59 +0000 (0:00:00.152) 0:00:19.527 *********** 2025-06-02 19:56:59.693794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:59.693963 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:59.696576 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:59.697816 | orchestrator | 2025-06-02 19:56:59.697840 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 19:56:59.697852 | orchestrator | Monday 02 June 2025 19:56:59 +0000 (0:00:00.345) 0:00:19.873 *********** 2025-06-02 19:56:59.860001 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:56:59.862955 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:56:59.864091 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:56:59.864659 | orchestrator | 2025-06-02 19:56:59.865786 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 19:56:59.866648 | orchestrator | Monday 02 June 2025 19:56:59 +0000 (0:00:00.164) 0:00:20.038 *********** 2025-06-02 19:57:00.013803 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 
19:57:00.013898 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:57:00.014148 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:57:00.015412 | orchestrator | 2025-06-02 19:57:00.017124 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 19:57:00.017428 | orchestrator | Monday 02 June 2025 19:57:00 +0000 (0:00:00.155) 0:00:20.193 *********** 2025-06-02 19:57:00.171632 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:57:00.172864 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:57:00.173716 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:57:00.175868 | orchestrator | 2025-06-02 19:57:00.176549 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 19:57:00.177586 | orchestrator | Monday 02 June 2025 19:57:00 +0000 (0:00:00.158) 0:00:20.351 *********** 2025-06-02 19:57:00.339216 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:57:00.340243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:57:00.341416 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:57:00.341690 | orchestrator | 2025-06-02 19:57:00.343200 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 19:57:00.343902 | orchestrator | Monday 02 June 2025 
19:57:00 +0000 (0:00:00.168) 0:00:20.519 *********** 2025-06-02 19:57:00.483687 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:57:00.483956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:57:00.484141 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:57:00.485697 | orchestrator | 2025-06-02 19:57:00.486102 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 19:57:00.486875 | orchestrator | Monday 02 June 2025 19:57:00 +0000 (0:00:00.144) 0:00:20.664 *********** 2025-06-02 19:57:00.641199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:57:00.641375 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:57:00.641967 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:57:00.642605 | orchestrator | 2025-06-02 19:57:00.643037 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 19:57:00.643594 | orchestrator | Monday 02 June 2025 19:57:00 +0000 (0:00:00.157) 0:00:20.822 *********** 2025-06-02 19:57:01.132980 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:57:01.133736 | orchestrator | 2025-06-02 19:57:01.134878 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-02 19:57:01.136283 | orchestrator | Monday 02 June 2025 19:57:01 +0000 (0:00:00.490) 0:00:21.312 *********** 2025-06-02 19:57:01.624408 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:57:01.625618 | 
orchestrator | 2025-06-02 19:57:01.627185 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-02 19:57:01.627812 | orchestrator | Monday 02 June 2025 19:57:01 +0000 (0:00:00.489) 0:00:21.802 *********** 2025-06-02 19:57:01.780858 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:57:01.782721 | orchestrator | 2025-06-02 19:57:01.785120 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 19:57:01.785500 | orchestrator | Monday 02 June 2025 19:57:01 +0000 (0:00:00.158) 0:00:21.961 *********** 2025-06-02 19:57:01.964050 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'vg_name': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'}) 2025-06-02 19:57:01.964517 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'vg_name': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'}) 2025-06-02 19:57:01.964562 | orchestrator | 2025-06-02 19:57:01.965069 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 19:57:01.965425 | orchestrator | Monday 02 June 2025 19:57:01 +0000 (0:00:00.183) 0:00:22.145 *********** 2025-06-02 19:57:02.107137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:57:02.107925 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:57:02.109171 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:57:02.110161 | orchestrator | 2025-06-02 19:57:02.110838 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-02 19:57:02.111703 | orchestrator | Monday 02 June 2025 19:57:02 +0000 
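The tasks above build a list of VG/LV names from the discovered LVM state and then verify that every `data`/`data_vg` pair configured in `lvm_volumes` actually exists. The playbook source is not shown in this log; the following is a minimal Python sketch of that check (function names and data shapes are assumptions, modeled on the items printed in the log):

```python
# Hypothetical sketch of "Create list of VG/LV names" and
# "Fail if block LV defined in lvm_volumes is missing".
# Data shapes mirror the lvm_report printed later in this log.

def vg_lv_names(lvm_report):
    """Build a set of 'vg/lv' identifiers from an lvs-style report."""
    return {f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]}

def missing_block_lvs(lvm_volumes, lvm_report):
    """Return lvm_volumes entries whose block LV does not exist."""
    existing = vg_lv_names(lvm_report)
    return [v for v in lvm_volumes
            if f"{v['data_vg']}/{v['data']}" not in existing]

lvm_report = {"lv": [
    {"lv_name": "osd-block-5468daec-208d-5ea7-b544-bcde6bebed84",
     "vg_name": "ceph-5468daec-208d-5ea7-b544-bcde6bebed84"},
]}
lvm_volumes = [
    {"data": "osd-block-5468daec-208d-5ea7-b544-bcde6bebed84",
     "data_vg": "ceph-5468daec-208d-5ea7-b544-bcde6bebed84"},
]
print(missing_block_lvs(lvm_volumes, lvm_report))  # → []
```

In the log the fail tasks report `skipping` for each item, consistent with every configured block LV being present.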
(0:00:00.142) 0:00:22.287 *********** 2025-06-02 19:57:02.505256 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:57:02.505419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:57:02.505437 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:57:02.505451 | orchestrator | 2025-06-02 19:57:02.505464 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-02 19:57:02.505476 | orchestrator | Monday 02 June 2025 19:57:02 +0000 (0:00:00.394) 0:00:22.682 *********** 2025-06-02 19:57:02.648142 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'})  2025-06-02 19:57:02.649850 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'})  2025-06-02 19:57:02.650892 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:57:02.652484 | orchestrator | 2025-06-02 19:57:02.653687 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-02 19:57:02.654463 | orchestrator | Monday 02 June 2025 19:57:02 +0000 (0:00:00.146) 0:00:22.828 *********** 2025-06-02 19:57:02.924932 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 19:57:02.925506 | orchestrator |  "lvm_report": { 2025-06-02 19:57:02.926602 | orchestrator |  "lv": [ 2025-06-02 19:57:02.928761 | orchestrator |  { 2025-06-02 19:57:02.929254 | orchestrator |  "lv_name": "osd-block-5468daec-208d-5ea7-b544-bcde6bebed84", 2025-06-02 19:57:02.929868 | orchestrator |  "vg_name": "ceph-5468daec-208d-5ea7-b544-bcde6bebed84" 2025-06-02 
19:57:02.930487 | orchestrator |  }, 2025-06-02 19:57:02.931008 | orchestrator |  { 2025-06-02 19:57:02.931439 | orchestrator |  "lv_name": "osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd", 2025-06-02 19:57:02.932151 | orchestrator |  "vg_name": "ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd" 2025-06-02 19:57:02.932901 | orchestrator |  } 2025-06-02 19:57:02.933296 | orchestrator |  ], 2025-06-02 19:57:02.933788 | orchestrator |  "pv": [ 2025-06-02 19:57:02.934188 | orchestrator |  { 2025-06-02 19:57:02.934642 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-02 19:57:02.935052 | orchestrator |  "vg_name": "ceph-5468daec-208d-5ea7-b544-bcde6bebed84" 2025-06-02 19:57:02.935599 | orchestrator |  }, 2025-06-02 19:57:02.935745 | orchestrator |  { 2025-06-02 19:57:02.936326 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-02 19:57:02.936663 | orchestrator |  "vg_name": "ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd" 2025-06-02 19:57:02.937369 | orchestrator |  } 2025-06-02 19:57:02.937561 | orchestrator |  ] 2025-06-02 19:57:02.938528 | orchestrator |  } 2025-06-02 19:57:02.938788 | orchestrator | } 2025-06-02 19:57:02.939747 | orchestrator | 2025-06-02 19:57:02.940242 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-02 19:57:02.940593 | orchestrator | 2025-06-02 19:57:02.940808 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 19:57:02.941201 | orchestrator | Monday 02 June 2025 19:57:02 +0000 (0:00:00.276) 0:00:23.104 *********** 2025-06-02 19:57:03.203198 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 19:57:03.203455 | orchestrator | 2025-06-02 19:57:03.204447 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 19:57:03.205195 | orchestrator | Monday 02 June 2025 19:57:03 +0000 (0:00:00.279) 0:00:23.384 *********** 2025-06-02 19:57:03.435413 | orchestrator | ok: 
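The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task merges the two LVM reports into the single `lvm_report` structure printed above. A hedged sketch of that merge, assuming the `{"report": [{"lv": [...]}]}` / `{"report": [{"pv": [...]}]}` shapes that `lvs`/`pvs` emit with `--reportformat json`:

```python
import json

# Sketch (assumed shapes) of combining lvs/pvs JSON command output
# into the lvm_report dict shown in the log: {"lv": [...], "pv": [...]}.

def combine_reports(lvs_json, pvs_json):
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lvs, "pv": pvs}

lvs_out = '{"report": [{"lv": [{"lv_name": "osd-block-x", "vg_name": "ceph-x"}]}]}'
pvs_out = '{"report": [{"pv": [{"pv_name": "/dev/sdb", "vg_name": "ceph-x"}]}]}'
print(combine_reports(lvs_out, pvs_out))
```

The printed `lvm_report` in the log has exactly this two-key shape, with each PV (`/dev/sdb`, `/dev/sdc`) carrying the VG of one OSD.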
[testbed-node-4] 2025-06-02 19:57:03.436035 | orchestrator | 2025-06-02 19:57:03.437109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:03.438055 | orchestrator | Monday 02 June 2025 19:57:03 +0000 (0:00:00.231) 0:00:23.615 *********** 2025-06-02 19:57:03.832459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 19:57:03.834667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 19:57:03.834686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 19:57:03.835510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 19:57:03.836638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 19:57:03.837262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 19:57:03.837996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 19:57:03.838795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 19:57:03.840691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 19:57:03.841194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 19:57:03.841846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 19:57:03.842475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 19:57:03.843113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 19:57:03.843712 | orchestrator | 2025-06-02 
19:57:03.844693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:03.844872 | orchestrator | Monday 02 June 2025 19:57:03 +0000 (0:00:00.395) 0:00:24.010 *********** 2025-06-02 19:57:04.034912 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:04.035103 | orchestrator | 2025-06-02 19:57:04.035253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:04.036187 | orchestrator | Monday 02 June 2025 19:57:04 +0000 (0:00:00.203) 0:00:24.214 *********** 2025-06-02 19:57:04.242108 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:04.242323 | orchestrator | 2025-06-02 19:57:04.242577 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:04.243558 | orchestrator | Monday 02 June 2025 19:57:04 +0000 (0:00:00.205) 0:00:24.419 *********** 2025-06-02 19:57:04.434072 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:04.434822 | orchestrator | 2025-06-02 19:57:04.435560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:04.436415 | orchestrator | Monday 02 June 2025 19:57:04 +0000 (0:00:00.194) 0:00:24.614 *********** 2025-06-02 19:57:05.012790 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:05.014207 | orchestrator | 2025-06-02 19:57:05.016240 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:05.016283 | orchestrator | Monday 02 June 2025 19:57:05 +0000 (0:00:00.577) 0:00:25.192 *********** 2025-06-02 19:57:05.241590 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:05.242525 | orchestrator | 2025-06-02 19:57:05.243261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:05.243927 | orchestrator | Monday 02 June 2025 19:57:05 +0000 (0:00:00.229) 
0:00:25.422 *********** 2025-06-02 19:57:05.428936 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:05.429320 | orchestrator | 2025-06-02 19:57:05.430001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:05.430885 | orchestrator | Monday 02 June 2025 19:57:05 +0000 (0:00:00.187) 0:00:25.609 *********** 2025-06-02 19:57:05.626979 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:05.627558 | orchestrator | 2025-06-02 19:57:05.628572 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:05.629319 | orchestrator | Monday 02 June 2025 19:57:05 +0000 (0:00:00.197) 0:00:25.807 *********** 2025-06-02 19:57:05.846368 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:05.847470 | orchestrator | 2025-06-02 19:57:05.849252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:05.850270 | orchestrator | Monday 02 June 2025 19:57:05 +0000 (0:00:00.218) 0:00:26.026 *********** 2025-06-02 19:57:06.277905 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d) 2025-06-02 19:57:06.277981 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d) 2025-06-02 19:57:06.277987 | orchestrator | 2025-06-02 19:57:06.277993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:06.277997 | orchestrator | Monday 02 June 2025 19:57:06 +0000 (0:00:00.426) 0:00:26.452 *********** 2025-06-02 19:57:06.672918 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696) 2025-06-02 19:57:06.674084 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696) 2025-06-02 19:57:06.677847 | orchestrator | 2025-06-02 19:57:06.677879 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:06.681966 | orchestrator | Monday 02 June 2025 19:57:06 +0000 (0:00:00.399) 0:00:26.852 *********** 2025-06-02 19:57:07.093477 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4) 2025-06-02 19:57:07.093713 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4) 2025-06-02 19:57:07.093862 | orchestrator | 2025-06-02 19:57:07.094483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:07.094980 | orchestrator | Monday 02 June 2025 19:57:07 +0000 (0:00:00.419) 0:00:27.272 *********** 2025-06-02 19:57:07.544450 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db) 2025-06-02 19:57:07.545182 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db) 2025-06-02 19:57:07.546365 | orchestrator | 2025-06-02 19:57:07.547142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:57:07.547999 | orchestrator | Monday 02 June 2025 19:57:07 +0000 (0:00:00.451) 0:00:27.723 *********** 2025-06-02 19:57:07.878602 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 19:57:07.879115 | orchestrator | 2025-06-02 19:57:07.879762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:07.880638 | orchestrator | Monday 02 June 2025 19:57:07 +0000 (0:00:00.335) 0:00:28.059 *********** 2025-06-02 19:57:08.479253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 19:57:08.479924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 
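The "Add known links" loop above registers `/dev/disk/by-id` aliases (e.g. `scsi-0QEMU_QEMU_HARDDISK_<uuid>`) for each block device. The included task file `/ansible/tasks/_add-device-links.yml` is not shown in this log; as an illustration only, resolving such links back to kernel device names can be sketched like this:

```python
import os
from pathlib import Path

# Hedged sketch: map by-id symlink names back to kernel device names
# (sdb, sdc, ...), as the log's ok: items suggest the task does.

def device_links(by_id_dir):
    """Map kernel device name -> list of by-id link names."""
    links = {}
    for entry in sorted(Path(by_id_dir).iterdir()):
        if entry.is_symlink():
            dev = os.path.basename(os.path.realpath(entry))
            links.setdefault(dev, []).append(entry.name)
    return links
```

Each `ok:` item in the log (two `scsi-*` aliases per disk, one `ata-*` alias for the DVD drive) corresponds to one such link being attached to its device's entry.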
19:57:08.480809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 19:57:08.481965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 19:57:08.483294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-02 19:57:08.484132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 19:57:08.484880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 19:57:08.485573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 19:57:08.486437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 19:57:08.487153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 19:57:08.487399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 19:57:08.488295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 19:57:08.488837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 19:57:08.489536 | orchestrator | 2025-06-02 19:57:08.489866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:08.490557 | orchestrator | Monday 02 June 2025 19:57:08 +0000 (0:00:00.599) 0:00:28.658 *********** 2025-06-02 19:57:08.679715 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:08.680261 | orchestrator | 2025-06-02 19:57:08.686288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:08.686591 | orchestrator | Monday 02 
June 2025 19:57:08 +0000 (0:00:00.201) 0:00:28.860 *********** 2025-06-02 19:57:08.876433 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:08.876544 | orchestrator | 2025-06-02 19:57:08.877401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:08.878386 | orchestrator | Monday 02 June 2025 19:57:08 +0000 (0:00:00.195) 0:00:29.055 *********** 2025-06-02 19:57:09.081918 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:09.082601 | orchestrator | 2025-06-02 19:57:09.082830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:09.083418 | orchestrator | Monday 02 June 2025 19:57:09 +0000 (0:00:00.204) 0:00:29.260 *********** 2025-06-02 19:57:09.284535 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:09.284785 | orchestrator | 2025-06-02 19:57:09.287066 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:09.287778 | orchestrator | Monday 02 June 2025 19:57:09 +0000 (0:00:00.203) 0:00:29.464 *********** 2025-06-02 19:57:09.476907 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:09.477498 | orchestrator | 2025-06-02 19:57:09.478191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:09.478844 | orchestrator | Monday 02 June 2025 19:57:09 +0000 (0:00:00.193) 0:00:29.657 *********** 2025-06-02 19:57:09.685322 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:09.686129 | orchestrator | 2025-06-02 19:57:09.687734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:09.688675 | orchestrator | Monday 02 June 2025 19:57:09 +0000 (0:00:00.208) 0:00:29.866 *********** 2025-06-02 19:57:09.903905 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:09.905617 | orchestrator | 2025-06-02 19:57:09.906114 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:09.907082 | orchestrator | Monday 02 June 2025 19:57:09 +0000 (0:00:00.218) 0:00:30.084 *********** 2025-06-02 19:57:10.154965 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:10.155560 | orchestrator | 2025-06-02 19:57:10.156795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:10.158514 | orchestrator | Monday 02 June 2025 19:57:10 +0000 (0:00:00.249) 0:00:30.334 *********** 2025-06-02 19:57:11.057053 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 19:57:11.057876 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 19:57:11.059669 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 19:57:11.061583 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 19:57:11.062526 | orchestrator | 2025-06-02 19:57:11.063228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:11.064419 | orchestrator | Monday 02 June 2025 19:57:11 +0000 (0:00:00.901) 0:00:31.235 *********** 2025-06-02 19:57:11.270222 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:11.270421 | orchestrator | 2025-06-02 19:57:11.271964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:11.273039 | orchestrator | Monday 02 June 2025 19:57:11 +0000 (0:00:00.214) 0:00:31.450 *********** 2025-06-02 19:57:11.462548 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:11.463157 | orchestrator | 2025-06-02 19:57:11.463970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:11.465036 | orchestrator | Monday 02 June 2025 19:57:11 +0000 (0:00:00.192) 0:00:31.642 *********** 2025-06-02 19:57:12.156128 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:12.157149 | 
orchestrator | 2025-06-02 19:57:12.157525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:57:12.158867 | orchestrator | Monday 02 June 2025 19:57:12 +0000 (0:00:00.693) 0:00:32.335 *********** 2025-06-02 19:57:12.395151 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:12.395316 | orchestrator | 2025-06-02 19:57:12.396051 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 19:57:12.396940 | orchestrator | Monday 02 June 2025 19:57:12 +0000 (0:00:00.239) 0:00:32.575 *********** 2025-06-02 19:57:12.532902 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:12.533507 | orchestrator | 2025-06-02 19:57:12.534511 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 19:57:12.535207 | orchestrator | Monday 02 June 2025 19:57:12 +0000 (0:00:00.137) 0:00:32.713 *********** 2025-06-02 19:57:12.702670 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b573976-5050-5314-b52d-708d81144fb3'}}) 2025-06-02 19:57:12.702887 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dc535ca-7422-5c6b-b80a-593b3887af48'}}) 2025-06-02 19:57:12.703805 | orchestrator | 2025-06-02 19:57:12.703960 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 19:57:12.705597 | orchestrator | Monday 02 June 2025 19:57:12 +0000 (0:00:00.168) 0:00:32.881 *********** 2025-06-02 19:57:14.572658 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'}) 2025-06-02 19:57:14.573079 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'}) 2025-06-02 19:57:14.573729 | 
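The items above show the naming convention used when creating the block VGs and LVs: each device's `osd_lvm_uuid` yields a VG `ceph-<uuid>` and an LV `osd-block-<uuid>`. A small Python sketch of that derivation (the `/dev/<name>` PV path is an assumption; the playbook itself is not shown):

```python
# Sketch of deriving VG/LV names from ceph_osd_devices, following the
# ceph-<uuid> / osd-block-<uuid> convention visible in the log items.

def block_vgs(ceph_osd_devices):
    """Map VG name -> assumed PV path for each configured OSD device."""
    return {f"ceph-{v['osd_lvm_uuid']}": f"/dev/{dev}"
            for dev, v in ceph_osd_devices.items()}

def lvm_volume_entries(ceph_osd_devices):
    """Build data/data_vg pairs like those in the 'Create block LVs' items."""
    return [{"data": f"osd-block-{v['osd_lvm_uuid']}",
             "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
            for v in ceph_osd_devices.values()]

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "0b573976-5050-5314-b52d-708d81144fb3"},
    "sdc": {"osd_lvm_uuid": "1dc535ca-7422-5c6b-b80a-593b3887af48"},
}
print(block_vgs(ceph_osd_devices))
```

This matches the `changed:` items for testbed-node-4, where `sdb`/`sdc` become the PVs of the two `ceph-<uuid>` VGs.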
orchestrator | 2025-06-02 19:57:14.573934 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 19:57:14.574556 | orchestrator | Monday 02 June 2025 19:57:14 +0000 (0:00:01.869) 0:00:34.751 *********** 2025-06-02 19:57:14.729500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})  2025-06-02 19:57:14.729705 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})  2025-06-02 19:57:14.730571 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:14.734791 | orchestrator | 2025-06-02 19:57:14.734855 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 19:57:14.734863 | orchestrator | Monday 02 June 2025 19:57:14 +0000 (0:00:00.157) 0:00:34.908 *********** 2025-06-02 19:57:16.112947 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'}) 2025-06-02 19:57:16.113064 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'}) 2025-06-02 19:57:16.113228 | orchestrator | 2025-06-02 19:57:16.113795 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 19:57:16.114269 | orchestrator | Monday 02 June 2025 19:57:16 +0000 (0:00:01.382) 0:00:36.291 *********** 2025-06-02 19:57:16.275141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})  2025-06-02 19:57:16.275650 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})  2025-06-02 19:57:16.277315 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:16.278532 | orchestrator | 2025-06-02 19:57:16.279933 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 19:57:16.280961 | orchestrator | Monday 02 June 2025 19:57:16 +0000 (0:00:00.164) 0:00:36.455 *********** 2025-06-02 19:57:16.426479 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:16.426848 | orchestrator | 2025-06-02 19:57:16.427989 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 19:57:16.429260 | orchestrator | Monday 02 June 2025 19:57:16 +0000 (0:00:00.151) 0:00:36.606 *********** 2025-06-02 19:57:16.597631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})  2025-06-02 19:57:16.598296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})  2025-06-02 19:57:16.599474 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:16.600403 | orchestrator | 2025-06-02 19:57:16.600780 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 19:57:16.601432 | orchestrator | Monday 02 June 2025 19:57:16 +0000 (0:00:00.170) 0:00:36.777 *********** 2025-06-02 19:57:16.749053 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:16.750116 | orchestrator | 2025-06-02 19:57:16.751510 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 19:57:16.753175 | orchestrator | Monday 02 June 2025 19:57:16 +0000 (0:00:00.149) 0:00:36.927 *********** 2025-06-02 19:57:16.912378 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})  2025-06-02 19:57:16.912511 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})  2025-06-02 19:57:16.914171 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:16.915632 | orchestrator | 2025-06-02 19:57:16.916172 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 19:57:16.917277 | orchestrator | Monday 02 June 2025 19:57:16 +0000 (0:00:00.164) 0:00:37.091 *********** 2025-06-02 19:57:17.284828 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:17.284961 | orchestrator | 2025-06-02 19:57:17.285650 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 19:57:17.285787 | orchestrator | Monday 02 June 2025 19:57:17 +0000 (0:00:00.371) 0:00:37.462 *********** 2025-06-02 19:57:17.434250 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})  2025-06-02 19:57:17.434878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})  2025-06-02 19:57:17.436223 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:17.436658 | orchestrator | 2025-06-02 19:57:17.437522 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 19:57:17.437929 | orchestrator | Monday 02 June 2025 19:57:17 +0000 (0:00:00.152) 0:00:37.615 *********** 2025-06-02 19:57:17.579784 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:57:17.580699 | orchestrator | 2025-06-02 19:57:17.580731 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-06-02 19:57:17.581097 | orchestrator | Monday 02 June 2025 19:57:17 +0000 (0:00:00.144) 0:00:37.760 *********** 2025-06-02 19:57:17.751488 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})  2025-06-02 19:57:17.753014 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})  2025-06-02 19:57:17.754309 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:17.755500 | orchestrator | 2025-06-02 19:57:17.755593 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 19:57:17.756274 | orchestrator | Monday 02 June 2025 19:57:17 +0000 (0:00:00.166) 0:00:37.926 *********** 2025-06-02 19:57:17.925542 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})  2025-06-02 19:57:17.925718 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})  2025-06-02 19:57:17.925735 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:17.925746 | orchestrator | 2025-06-02 19:57:17.926575 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 19:57:17.927222 | orchestrator | Monday 02 June 2025 19:57:17 +0000 (0:00:00.179) 0:00:38.105 *********** 2025-06-02 19:57:18.089740 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})  2025-06-02 19:57:18.090887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 
'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})  2025-06-02 19:57:18.091002 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:18.091514 | orchestrator | 2025-06-02 19:57:18.092079 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 19:57:18.092550 | orchestrator | Monday 02 June 2025 19:57:18 +0000 (0:00:00.164) 0:00:38.270 *********** 2025-06-02 19:57:18.228648 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:18.228884 | orchestrator | 2025-06-02 19:57:18.229920 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 19:57:18.230661 | orchestrator | Monday 02 June 2025 19:57:18 +0000 (0:00:00.138) 0:00:38.408 *********** 2025-06-02 19:57:18.372132 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:18.372484 | orchestrator | 2025-06-02 19:57:18.373250 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 19:57:18.374117 | orchestrator | Monday 02 June 2025 19:57:18 +0000 (0:00:00.142) 0:00:38.551 *********** 2025-06-02 19:57:18.506477 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:57:18.506699 | orchestrator | 2025-06-02 19:57:18.506995 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 19:57:18.507897 | orchestrator | Monday 02 June 2025 19:57:18 +0000 (0:00:00.134) 0:00:38.685 *********** 2025-06-02 19:57:18.641971 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 19:57:18.643014 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 19:57:18.643570 | orchestrator | } 2025-06-02 19:57:18.644620 | orchestrator | 2025-06-02 19:57:18.646843 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 19:57:18.646887 | orchestrator | Monday 02 June 2025 19:57:18 +0000 (0:00:00.136) 0:00:38.822 *********** 2025-06-02 19:57:18.797728 | 
orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:57:18.798302 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-06-02 19:57:18.799026 | orchestrator | }
2025-06-02 19:57:18.799787 | orchestrator |
2025-06-02 19:57:18.801720 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-02 19:57:18.802573 | orchestrator | Monday 02 June 2025 19:57:18 +0000 (0:00:00.152) 0:00:38.975 ***********
2025-06-02 19:57:18.937584 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:57:18.938404 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-06-02 19:57:18.939635 | orchestrator | }
2025-06-02 19:57:18.939986 | orchestrator |
2025-06-02 19:57:18.941576 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-02 19:57:18.942483 | orchestrator | Monday 02 June 2025 19:57:18 +0000 (0:00:00.140) 0:00:39.116 ***********
2025-06-02 19:57:19.715570 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:57:19.716257 | orchestrator |
2025-06-02 19:57:19.720209 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-02 19:57:19.720271 | orchestrator | Monday 02 June 2025 19:57:19 +0000 (0:00:00.777) 0:00:39.894 ***********
2025-06-02 19:57:20.240830 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:57:20.242850 | orchestrator |
2025-06-02 19:57:20.243032 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-02 19:57:20.243865 | orchestrator | Monday 02 June 2025 19:57:20 +0000 (0:00:00.524) 0:00:40.418 ***********
2025-06-02 19:57:20.755880 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:57:20.756868 | orchestrator |
2025-06-02 19:57:20.757873 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-02 19:57:20.758568 | orchestrator | Monday 02 June 2025 19:57:20 +0000 (0:00:00.515) 0:00:40.934 ***********
2025-06-02 19:57:20.935240 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:57:20.936196 | orchestrator |
2025-06-02 19:57:20.936256 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-02 19:57:20.936270 | orchestrator | Monday 02 June 2025 19:57:20 +0000 (0:00:00.181) 0:00:41.115 ***********
2025-06-02 19:57:21.053630 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:21.054998 | orchestrator |
2025-06-02 19:57:21.055413 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-02 19:57:21.056444 | orchestrator | Monday 02 June 2025 19:57:21 +0000 (0:00:00.117) 0:00:41.233 ***********
2025-06-02 19:57:21.169450 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:21.170237 | orchestrator |
2025-06-02 19:57:21.171457 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-02 19:57:21.172431 | orchestrator | Monday 02 June 2025 19:57:21 +0000 (0:00:00.114) 0:00:41.347 ***********
2025-06-02 19:57:21.357398 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:57:21.359684 | orchestrator |  "vgs_report": {
2025-06-02 19:57:21.360715 | orchestrator |  "vg": []
2025-06-02 19:57:21.361526 | orchestrator |  }
2025-06-02 19:57:21.362397 | orchestrator | }
2025-06-02 19:57:21.362954 | orchestrator |
2025-06-02 19:57:21.363875 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-02 19:57:21.364369 | orchestrator | Monday 02 June 2025 19:57:21 +0000 (0:00:00.187) 0:00:41.535 ***********
2025-06-02 19:57:21.499487 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:21.500462 | orchestrator |
2025-06-02 19:57:21.501085 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-02 19:57:21.501508 | orchestrator | Monday 02 June 2025 19:57:21 +0000 (0:00:00.144) 0:00:41.679 ***********
2025-06-02 19:57:21.620638 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:21.622129 | orchestrator |
2025-06-02 19:57:21.622201 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-02 19:57:21.623783 | orchestrator | Monday 02 June 2025 19:57:21 +0000 (0:00:00.121) 0:00:41.801 ***********
2025-06-02 19:57:21.758365 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:21.759459 | orchestrator |
2025-06-02 19:57:21.761609 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-02 19:57:21.762157 | orchestrator | Monday 02 June 2025 19:57:21 +0000 (0:00:00.135) 0:00:41.936 ***********
2025-06-02 19:57:21.897632 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:21.899193 | orchestrator |
2025-06-02 19:57:21.900693 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-02 19:57:21.901412 | orchestrator | Monday 02 June 2025 19:57:21 +0000 (0:00:00.141) 0:00:42.077 ***********
2025-06-02 19:57:22.028188 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:22.029540 | orchestrator |
2025-06-02 19:57:22.030120 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-02 19:57:22.031306 | orchestrator | Monday 02 June 2025 19:57:22 +0000 (0:00:00.130) 0:00:42.208 ***********
2025-06-02 19:57:22.379614 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:22.380477 | orchestrator |
2025-06-02 19:57:22.381653 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-02 19:57:22.382505 | orchestrator | Monday 02 June 2025 19:57:22 +0000 (0:00:00.350) 0:00:42.558 ***********
2025-06-02 19:57:22.520966 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:22.521267 | orchestrator |
2025-06-02 19:57:22.521729 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-02 19:57:22.522736 | orchestrator | Monday 02 June 2025 19:57:22 +0000 (0:00:00.143) 0:00:42.701 ***********
2025-06-02 19:57:22.658285 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:22.658748 | orchestrator |
2025-06-02 19:57:22.659916 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-02 19:57:22.660375 | orchestrator | Monday 02 June 2025 19:57:22 +0000 (0:00:00.136) 0:00:42.838 ***********
2025-06-02 19:57:22.805375 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:22.806573 | orchestrator |
2025-06-02 19:57:22.809618 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-02 19:57:22.809692 | orchestrator | Monday 02 June 2025 19:57:22 +0000 (0:00:00.146) 0:00:42.984 ***********
2025-06-02 19:57:22.936392 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:22.937819 | orchestrator |
2025-06-02 19:57:22.938960 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-02 19:57:22.940830 | orchestrator | Monday 02 June 2025 19:57:22 +0000 (0:00:00.130) 0:00:43.115 ***********
2025-06-02 19:57:23.072604 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:23.073896 | orchestrator |
2025-06-02 19:57:23.074696 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-02 19:57:23.075654 | orchestrator | Monday 02 June 2025 19:57:23 +0000 (0:00:00.135) 0:00:43.251 ***********
2025-06-02 19:57:23.226260 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:23.226490 | orchestrator |
2025-06-02 19:57:23.227153 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-02 19:57:23.227808 | orchestrator | Monday 02 June 2025 19:57:23 +0000 (0:00:00.155) 0:00:43.406 ***********
2025-06-02 19:57:23.381156 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:23.383045 | orchestrator |
2025-06-02 19:57:23.384117 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-02 19:57:23.385639 | orchestrator | Monday 02 June 2025 19:57:23 +0000 (0:00:00.153) 0:00:43.560 ***********
2025-06-02 19:57:23.527712 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:23.530293 | orchestrator |
2025-06-02 19:57:23.531380 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-02 19:57:23.532392 | orchestrator | Monday 02 June 2025 19:57:23 +0000 (0:00:00.146) 0:00:43.706 ***********
2025-06-02 19:57:23.683200 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:23.684718 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:23.686366 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:23.687448 | orchestrator |
2025-06-02 19:57:23.687767 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-02 19:57:23.688113 | orchestrator | Monday 02 June 2025 19:57:23 +0000 (0:00:00.155) 0:00:43.862 ***********
2025-06-02 19:57:23.859001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:23.859861 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:23.861593 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:23.862270 | orchestrator |
2025-06-02 19:57:23.862542 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-02 19:57:23.863079 | orchestrator | Monday 02 June 2025 19:57:23 +0000 (0:00:00.173) 0:00:44.035 ***********
2025-06-02 19:57:24.014121 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:24.014443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:24.017102 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:24.017749 | orchestrator |
2025-06-02 19:57:24.018231 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-02 19:57:24.018878 | orchestrator | Monday 02 June 2025 19:57:24 +0000 (0:00:00.156) 0:00:44.192 ***********
2025-06-02 19:57:24.383602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:24.383804 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:24.385136 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:24.386688 | orchestrator |
2025-06-02 19:57:24.388238 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-02 19:57:24.388677 | orchestrator | Monday 02 June 2025 19:57:24 +0000 (0:00:00.369) 0:00:44.561 ***********
2025-06-02 19:57:24.552956 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:24.553182 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:24.554889 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:24.555015 | orchestrator |
2025-06-02 19:57:24.555750 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-02 19:57:24.556531 | orchestrator | Monday 02 June 2025 19:57:24 +0000 (0:00:00.171) 0:00:44.733 ***********
2025-06-02 19:57:24.714511 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:24.715448 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:24.715566 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:24.718593 | orchestrator |
2025-06-02 19:57:24.722013 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 19:57:24.722895 | orchestrator | Monday 02 June 2025 19:57:24 +0000 (0:00:00.161) 0:00:44.894 ***********
2025-06-02 19:57:24.893478 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:24.894756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:24.895815 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:24.896829 | orchestrator |
2025-06-02 19:57:24.897941 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 19:57:24.898668 | orchestrator | Monday 02 June 2025 19:57:24 +0000 (0:00:00.177) 0:00:45.072 ***********
2025-06-02 19:57:25.045985 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:25.047953 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:25.050053 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:25.050850 | orchestrator |
2025-06-02 19:57:25.051529 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 19:57:25.052171 | orchestrator | Monday 02 June 2025 19:57:25 +0000 (0:00:00.153) 0:00:45.225 ***********
2025-06-02 19:57:25.549649 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:57:25.549933 | orchestrator |
2025-06-02 19:57:25.551456 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 19:57:25.551731 | orchestrator | Monday 02 June 2025 19:57:25 +0000 (0:00:00.503) 0:00:45.728 ***********
2025-06-02 19:57:26.073103 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:57:26.074197 | orchestrator |
2025-06-02 19:57:26.075759 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 19:57:26.076917 | orchestrator | Monday 02 June 2025 19:57:26 +0000 (0:00:00.523) 0:00:46.251 ***********
2025-06-02 19:57:26.223223 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:57:26.223544 | orchestrator |
2025-06-02 19:57:26.224383 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 19:57:26.225987 | orchestrator | Monday 02 June 2025 19:57:26 +0000 (0:00:00.151) 0:00:46.403 ***********
2025-06-02 19:57:26.388832 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'vg_name': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:26.389695 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'vg_name': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:26.390630 | orchestrator |
2025-06-02 19:57:26.391012 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 19:57:26.392069 | orchestrator | Monday 02 June 2025 19:57:26 +0000 (0:00:00.165) 0:00:46.569 ***********
2025-06-02 19:57:26.543685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:26.543842 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:26.545437 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:26.545980 | orchestrator |
2025-06-02 19:57:26.546637 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 19:57:26.547075 | orchestrator | Monday 02 June 2025 19:57:26 +0000 (0:00:00.154) 0:00:46.723 ***********
2025-06-02 19:57:26.698400 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:26.698856 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:26.699652 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:26.700045 | orchestrator |
2025-06-02 19:57:26.701052 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 19:57:26.701415 | orchestrator | Monday 02 June 2025 19:57:26 +0000 (0:00:00.155) 0:00:46.879 ***********
2025-06-02 19:57:26.857011 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'})
2025-06-02 19:57:26.857385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'})
2025-06-02 19:57:26.858065 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:26.861384 | orchestrator |
2025-06-02 19:57:26.864673 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 19:57:26.865880 | orchestrator | Monday 02 June 2025 19:57:26 +0000 (0:00:00.157) 0:00:47.036 ***********
2025-06-02 19:57:27.375349 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:57:27.375601 | orchestrator |  "lvm_report": {
2025-06-02 19:57:27.375626 | orchestrator |  "lv": [
2025-06-02 19:57:27.375968 | orchestrator |  {
2025-06-02 19:57:27.376591 | orchestrator |  "lv_name": "osd-block-0b573976-5050-5314-b52d-708d81144fb3",
2025-06-02 19:57:27.376650 | orchestrator |  "vg_name": "ceph-0b573976-5050-5314-b52d-708d81144fb3"
2025-06-02 19:57:27.377751 | orchestrator |  },
2025-06-02 19:57:27.377855 | orchestrator |  {
2025-06-02 19:57:27.377870 | orchestrator |  "lv_name": "osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48",
2025-06-02 19:57:27.378085 | orchestrator |  "vg_name": "ceph-1dc535ca-7422-5c6b-b80a-593b3887af48"
2025-06-02 19:57:27.378982 | orchestrator |  }
2025-06-02 19:57:27.379052 | orchestrator |  ],
2025-06-02 19:57:27.379348 | orchestrator |  "pv": [
2025-06-02 19:57:27.379546 | orchestrator |  {
2025-06-02 19:57:27.379559 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-02 19:57:27.379821 | orchestrator |  "vg_name": "ceph-0b573976-5050-5314-b52d-708d81144fb3"
2025-06-02 19:57:27.380962 | orchestrator |  },
2025-06-02 19:57:27.380988 | orchestrator |  {
2025-06-02 19:57:27.381019 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-02 19:57:27.381377 | orchestrator |  "vg_name": "ceph-1dc535ca-7422-5c6b-b80a-593b3887af48"
2025-06-02 19:57:27.381561 | orchestrator |  }
2025-06-02 19:57:27.382479 | orchestrator |  ]
2025-06-02 19:57:27.382679 | orchestrator |  }
2025-06-02 19:57:27.384031 | orchestrator | }
2025-06-02 19:57:27.384059 | orchestrator |
2025-06-02 19:57:27.384690 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 19:57:27.384966 | orchestrator |
2025-06-02 19:57:27.385955 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 19:57:27.386098 | orchestrator | Monday 02 June 2025 19:57:27 +0000 (0:00:00.519) 0:00:47.556 ***********
2025-06-02 19:57:27.643381 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 19:57:27.644300 | orchestrator |
2025-06-02 19:57:27.645550 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 19:57:27.646562 | orchestrator | Monday 02 June 2025 19:57:27 +0000 (0:00:00.267) 0:00:47.823 ***********
2025-06-02 19:57:27.870253 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:57:27.871744 | orchestrator |
2025-06-02 19:57:27.873852 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:27.875625 | orchestrator | Monday 02 June 2025 19:57:27 +0000 (0:00:00.226) 0:00:48.050 ***********
2025-06-02 19:57:28.287279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-02 19:57:28.287443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-02 19:57:28.287547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-02 19:57:28.288516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-02 19:57:28.288656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-02 19:57:28.291519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-02 19:57:28.292213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-02 19:57:28.292837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-02 19:57:28.293461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-02 19:57:28.294221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-02 19:57:28.294762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-02 19:57:28.295762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-02 19:57:28.295934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-02 19:57:28.296625 | orchestrator |
2025-06-02 19:57:28.297179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:28.297716 | orchestrator | Monday 02 June 2025 19:57:28 +0000 (0:00:00.414) 0:00:48.465 ***********
2025-06-02 19:57:28.492194 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:28.493245 | orchestrator |
2025-06-02 19:57:28.495663 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:28.496020 | orchestrator | Monday 02 June 2025 19:57:28 +0000 (0:00:00.207) 0:00:48.672 ***********
2025-06-02 19:57:28.684859 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:28.685687 | orchestrator |
2025-06-02 19:57:28.686548 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:28.687355 | orchestrator | Monday 02 June 2025 19:57:28 +0000 (0:00:00.192) 0:00:48.865 ***********
2025-06-02 19:57:28.897514 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:28.898502 | orchestrator |
2025-06-02 19:57:28.899390 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:28.901610 | orchestrator | Monday 02 June 2025 19:57:28 +0000 (0:00:00.212) 0:00:49.077 ***********
2025-06-02 19:57:29.107529 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:29.110006 | orchestrator |
2025-06-02 19:57:29.112933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:29.112959 | orchestrator | Monday 02 June 2025 19:57:29 +0000 (0:00:00.208) 0:00:49.286 ***********
2025-06-02 19:57:29.331094 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:29.331539 | orchestrator |
2025-06-02 19:57:29.332864 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:29.332906 | orchestrator | Monday 02 June 2025 19:57:29 +0000 (0:00:00.225) 0:00:49.511 ***********
2025-06-02 19:57:30.064049 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:30.065700 | orchestrator |
2025-06-02 19:57:30.066150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:30.067209 | orchestrator | Monday 02 June 2025 19:57:30 +0000 (0:00:00.730) 0:00:50.242 ***********
2025-06-02 19:57:30.278274 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:30.278761 | orchestrator |
2025-06-02 19:57:30.279634 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:30.280847 | orchestrator | Monday 02 June 2025 19:57:30 +0000 (0:00:00.214) 0:00:50.457 ***********
2025-06-02 19:57:30.495921 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:30.496006 | orchestrator |
2025-06-02 19:57:30.496798 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:30.498509 | orchestrator | Monday 02 June 2025 19:57:30 +0000 (0:00:00.217) 0:00:50.674 ***********
2025-06-02 19:57:30.967296 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3)
2025-06-02 19:57:30.969541 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3)
2025-06-02 19:57:30.970603 | orchestrator |
2025-06-02 19:57:30.971211 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:30.972098 | orchestrator | Monday 02 June 2025 19:57:30 +0000 (0:00:00.470) 0:00:51.145 ***********
2025-06-02 19:57:31.423438 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76)
2025-06-02 19:57:31.424860 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76)
2025-06-02 19:57:31.427399 | orchestrator |
2025-06-02 19:57:31.431254 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:31.433465 | orchestrator | Monday 02 June 2025 19:57:31 +0000 (0:00:00.457) 0:00:51.602 ***********
2025-06-02 19:57:31.891526 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6)
2025-06-02 19:57:31.893064 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6)
2025-06-02 19:57:31.893995 | orchestrator |
2025-06-02 19:57:31.895580 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:31.895877 | orchestrator | Monday 02 June 2025 19:57:31 +0000 (0:00:00.468) 0:00:52.072 ***********
2025-06-02 19:57:32.426885 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f)
2025-06-02 19:57:32.427057 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f)
2025-06-02 19:57:32.428030 | orchestrator |
2025-06-02 19:57:32.428245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:57:32.428956 | orchestrator | Monday 02 June 2025 19:57:32 +0000 (0:00:00.533) 0:00:52.605 ***********
2025-06-02 19:57:32.813864 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 19:57:32.814539 | orchestrator |
2025-06-02 19:57:32.814861 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:32.816647 | orchestrator | Monday 02 June 2025 19:57:32 +0000 (0:00:00.388) 0:00:52.993 ***********
2025-06-02 19:57:33.278713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-02 19:57:33.280111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-02 19:57:33.281761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-02 19:57:33.282184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-02 19:57:33.283752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-02 19:57:33.285057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-02 19:57:33.285530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-02 19:57:33.287750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-02 19:57:33.288521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-02 19:57:33.289079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-02 19:57:33.290406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-02 19:57:33.291115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-02 19:57:33.291982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-02 19:57:33.292908 | orchestrator |
2025-06-02 19:57:33.293845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:33.294616 | orchestrator | Monday 02 June 2025 19:57:33 +0000 (0:00:00.463) 0:00:53.457 ***********
2025-06-02 19:57:33.477716 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:33.477846 | orchestrator |
2025-06-02 19:57:33.477871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:33.477885 | orchestrator | Monday 02 June 2025 19:57:33 +0000 (0:00:00.200) 0:00:53.657 ***********
2025-06-02 19:57:33.702527 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:33.703510 | orchestrator |
2025-06-02 19:57:33.705922 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:33.705973 | orchestrator | Monday 02 June 2025 19:57:33 +0000 (0:00:00.224) 0:00:53.882 ***********
2025-06-02 19:57:34.401112 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:34.402497 | orchestrator |
2025-06-02 19:57:34.403083 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:34.405016 | orchestrator | Monday 02 June 2025 19:57:34 +0000 (0:00:00.696) 0:00:54.578 ***********
2025-06-02 19:57:34.607135 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:34.608362 | orchestrator |
2025-06-02 19:57:34.608949 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:34.610285 | orchestrator | Monday 02 June 2025 19:57:34 +0000 (0:00:00.208) 0:00:54.787 ***********
2025-06-02 19:57:34.832301 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:34.832441 | orchestrator |
2025-06-02 19:57:34.833565 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:34.834760 | orchestrator | Monday 02 June 2025 19:57:34 +0000 (0:00:00.223) 0:00:55.010 ***********
2025-06-02 19:57:35.045200 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:35.045630 | orchestrator |
2025-06-02 19:57:35.046080 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:35.046801 | orchestrator | Monday 02 June 2025 19:57:35 +0000 (0:00:00.213) 0:00:55.223 ***********
2025-06-02 19:57:35.264147 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:35.265478 | orchestrator |
2025-06-02 19:57:35.267256 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:35.268150 | orchestrator | Monday 02 June 2025 19:57:35 +0000 (0:00:00.216) 0:00:55.440 ***********
2025-06-02 19:57:35.459753 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:35.461174 | orchestrator |
2025-06-02 19:57:35.461982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:35.463117 | orchestrator | Monday 02 June 2025 19:57:35 +0000 (0:00:00.199) 0:00:55.639 ***********
2025-06-02 19:57:36.139536 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-02 19:57:36.140709 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-02 19:57:36.141719 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-02 19:57:36.142906 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-02 19:57:36.143520 | orchestrator |
2025-06-02 19:57:36.144128 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:36.146604 | orchestrator | Monday 02 June 2025 19:57:36 +0000 (0:00:00.677) 0:00:56.316 ***********
2025-06-02 19:57:36.371894 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:36.372894 | orchestrator |
2025-06-02 19:57:36.373418 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:36.375544 | orchestrator | Monday 02 June 2025 19:57:36 +0000 (0:00:00.234) 0:00:56.551 ***********
2025-06-02 19:57:36.569500 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:36.570701 | orchestrator |
2025-06-02 19:57:36.571701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:36.571964 | orchestrator | Monday 02 June 2025 19:57:36 +0000 (0:00:00.197) 0:00:56.749 ***********
2025-06-02 19:57:36.773120 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:36.773974 | orchestrator |
2025-06-02 19:57:36.774962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:57:36.775678 | orchestrator | Monday 02 June 2025 19:57:36 +0000 (0:00:00.204) 0:00:56.953 ***********
2025-06-02 19:57:36.969139 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:36.969524 | orchestrator |
2025-06-02 19:57:36.970322 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-02 19:57:36.971435 | orchestrator | Monday 02 June 2025 19:57:36 +0000 (0:00:00.195) 0:00:57.149 ***********
2025-06-02 19:57:37.330823 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:37.331243 | orchestrator |
2025-06-02 19:57:37.331795 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-02 19:57:37.332209 | orchestrator | Monday 02 June 2025 19:57:37 +0000 (0:00:00.360) 0:00:57.510 ***********
2025-06-02 19:57:37.519670 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1b51fe1f-19f9-5db6-a741-38088f1d71cf'}})
2025-06-02 19:57:37.520510 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2dc54921-ef42-515a-84de-1f3d0e017dc1'}})
2025-06-02 19:57:37.521171 | orchestrator |
2025-06-02 19:57:37.522077 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-02 19:57:37.522788 | orchestrator | Monday 02 June 2025 19:57:37 +0000 (0:00:00.190) 0:00:57.700 ***********
2025-06-02 19:57:39.323473 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})
2025-06-02 19:57:39.323639 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})
2025-06-02 19:57:39.324368 | orchestrator |
2025-06-02 19:57:39.325168 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-02 19:57:39.325653 | orchestrator | Monday 02 June 2025 19:57:39 +0000 (0:00:01.802) 0:00:59.502 ***********
2025-06-02 19:57:39.473664 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})
2025-06-02 19:57:39.473892 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})
2025-06-02 19:57:39.475426 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:39.477796 | orchestrator |
2025-06-02 19:57:39.477838 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-02 19:57:39.478781 | orchestrator | Monday 02 June 2025 19:57:39 +0000 (0:00:00.151) 0:00:59.654 ***********
2025-06-02 19:57:40.674412 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})
2025-06-02 19:57:40.674607 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})
2025-06-02 19:57:40.675777 | orchestrator |
2025-06-02 19:57:40.675806 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-02 19:57:40.676456 | orchestrator | Monday 02 June 2025 19:57:40 +0000 (0:00:01.198) 0:01:00.852 ***********
2025-06-02 19:57:40.822779 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})
2025-06-02 19:57:40.823437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})
2025-06-02 19:57:40.824389 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:40.825295 | orchestrator |
2025-06-02 19:57:40.825850 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-02 19:57:40.826094 | orchestrator | Monday 02 June 2025 19:57:40 +0000 (0:00:00.148) 0:01:01.001 ***********
2025-06-02 19:57:40.954699 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:40.956072 | orchestrator |
2025-06-02 19:57:40.957107 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-02 19:57:40.958545 | orchestrator | Monday 02 June 2025 19:57:40 +0000 (0:00:00.133) 0:01:01.135 ***********
2025-06-02 19:57:41.114083 |
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:41.114417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:41.115153 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:41.115613 | orchestrator | 2025-06-02 19:57:41.116473 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 19:57:41.117795 | orchestrator | Monday 02 June 2025 19:57:41 +0000 (0:00:00.159) 0:01:01.294 *********** 2025-06-02 19:57:41.265913 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:41.266662 | orchestrator | 2025-06-02 19:57:41.267842 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 19:57:41.269809 | orchestrator | Monday 02 June 2025 19:57:41 +0000 (0:00:00.152) 0:01:01.446 *********** 2025-06-02 19:57:41.414077 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:41.415669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:41.415849 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:41.417937 | orchestrator | 2025-06-02 19:57:41.418621 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 19:57:41.419502 | orchestrator | Monday 02 June 2025 19:57:41 +0000 (0:00:00.146) 0:01:01.593 *********** 2025-06-02 19:57:41.549927 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:41.550866 | orchestrator | 2025-06-02 19:57:41.551923 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 19:57:41.552740 | orchestrator | Monday 02 June 2025 19:57:41 +0000 (0:00:00.136) 0:01:01.729 *********** 2025-06-02 19:57:41.696881 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:41.697782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:41.698960 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:41.699698 | orchestrator | 2025-06-02 19:57:41.700726 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 19:57:41.701566 | orchestrator | Monday 02 June 2025 19:57:41 +0000 (0:00:00.147) 0:01:01.877 *********** 2025-06-02 19:57:41.832705 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:57:41.832877 | orchestrator | 2025-06-02 19:57:41.834082 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 19:57:41.834754 | orchestrator | Monday 02 June 2025 19:57:41 +0000 (0:00:00.135) 0:01:02.013 *********** 2025-06-02 19:57:42.177636 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:42.178520 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:42.179492 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:42.181597 | orchestrator | 2025-06-02 19:57:42.181769 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 19:57:42.182194 | orchestrator | Monday 02 June 2025 
19:57:42 +0000 (0:00:00.345) 0:01:02.358 *********** 2025-06-02 19:57:42.343445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:42.343724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:42.343932 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:42.345039 | orchestrator | 2025-06-02 19:57:42.345957 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 19:57:42.346467 | orchestrator | Monday 02 June 2025 19:57:42 +0000 (0:00:00.165) 0:01:02.523 *********** 2025-06-02 19:57:42.474357 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:42.474517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:42.475715 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:42.476825 | orchestrator | 2025-06-02 19:57:42.477525 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 19:57:42.478068 | orchestrator | Monday 02 June 2025 19:57:42 +0000 (0:00:00.130) 0:01:02.654 *********** 2025-06-02 19:57:42.592415 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:42.593804 | orchestrator | 2025-06-02 19:57:42.594499 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 19:57:42.595713 | orchestrator | Monday 02 June 2025 19:57:42 +0000 (0:00:00.118) 0:01:02.772 *********** 2025-06-02 19:57:42.717137 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
19:57:42.717490 | orchestrator | 2025-06-02 19:57:42.717840 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 19:57:42.719892 | orchestrator | Monday 02 June 2025 19:57:42 +0000 (0:00:00.124) 0:01:02.897 *********** 2025-06-02 19:57:42.840404 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:42.840991 | orchestrator | 2025-06-02 19:57:42.842237 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 19:57:42.843655 | orchestrator | Monday 02 June 2025 19:57:42 +0000 (0:00:00.122) 0:01:03.019 *********** 2025-06-02 19:57:42.979140 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:57:42.980418 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 19:57:42.981156 | orchestrator | } 2025-06-02 19:57:42.982432 | orchestrator | 2025-06-02 19:57:42.983251 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 19:57:42.983928 | orchestrator | Monday 02 June 2025 19:57:42 +0000 (0:00:00.140) 0:01:03.160 *********** 2025-06-02 19:57:43.109722 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:57:43.109839 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 19:57:43.110670 | orchestrator | } 2025-06-02 19:57:43.111065 | orchestrator | 2025-06-02 19:57:43.111890 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 19:57:43.112134 | orchestrator | Monday 02 June 2025 19:57:43 +0000 (0:00:00.130) 0:01:03.290 *********** 2025-06-02 19:57:43.258927 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:57:43.261723 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 19:57:43.262582 | orchestrator | } 2025-06-02 19:57:43.263208 | orchestrator | 2025-06-02 19:57:43.264177 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 19:57:43.264503 | 
orchestrator | Monday 02 June 2025 19:57:43 +0000 (0:00:00.147) 0:01:03.438 *********** 2025-06-02 19:57:43.759033 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:57:43.759188 | orchestrator | 2025-06-02 19:57:43.759494 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 19:57:43.760206 | orchestrator | Monday 02 June 2025 19:57:43 +0000 (0:00:00.500) 0:01:03.938 *********** 2025-06-02 19:57:44.285484 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:57:44.286072 | orchestrator | 2025-06-02 19:57:44.286614 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 19:57:44.287244 | orchestrator | Monday 02 June 2025 19:57:44 +0000 (0:00:00.526) 0:01:04.465 *********** 2025-06-02 19:57:44.752494 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:57:44.753443 | orchestrator | 2025-06-02 19:57:44.754115 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 19:57:44.754823 | orchestrator | Monday 02 June 2025 19:57:44 +0000 (0:00:00.466) 0:01:04.931 *********** 2025-06-02 19:57:45.076850 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:57:45.076947 | orchestrator | 2025-06-02 19:57:45.077011 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 19:57:45.077425 | orchestrator | Monday 02 June 2025 19:57:45 +0000 (0:00:00.325) 0:01:05.257 *********** 2025-06-02 19:57:45.194866 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:45.195825 | orchestrator | 2025-06-02 19:57:45.196918 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 19:57:45.197783 | orchestrator | Monday 02 June 2025 19:57:45 +0000 (0:00:00.116) 0:01:05.374 *********** 2025-06-02 19:57:45.312267 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:45.313879 | orchestrator | 2025-06-02 19:57:45.315780 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 19:57:45.316218 | orchestrator | Monday 02 June 2025 19:57:45 +0000 (0:00:00.117) 0:01:05.491 *********** 2025-06-02 19:57:45.464133 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:57:45.464399 | orchestrator |  "vgs_report": { 2025-06-02 19:57:45.465382 | orchestrator |  "vg": [] 2025-06-02 19:57:45.465979 | orchestrator |  } 2025-06-02 19:57:45.466762 | orchestrator | } 2025-06-02 19:57:45.468505 | orchestrator | 2025-06-02 19:57:45.469155 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 19:57:45.469809 | orchestrator | Monday 02 June 2025 19:57:45 +0000 (0:00:00.152) 0:01:05.644 *********** 2025-06-02 19:57:45.590820 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:45.591243 | orchestrator | 2025-06-02 19:57:45.592502 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 19:57:45.593261 | orchestrator | Monday 02 June 2025 19:57:45 +0000 (0:00:00.126) 0:01:05.771 *********** 2025-06-02 19:57:45.727788 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:45.728770 | orchestrator | 2025-06-02 19:57:45.729653 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 19:57:45.731052 | orchestrator | Monday 02 June 2025 19:57:45 +0000 (0:00:00.137) 0:01:05.908 *********** 2025-06-02 19:57:45.868662 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:45.869479 | orchestrator | 2025-06-02 19:57:45.870640 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 19:57:45.871647 | orchestrator | Monday 02 June 2025 19:57:45 +0000 (0:00:00.139) 0:01:06.048 *********** 2025-06-02 19:57:46.015588 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:46.016440 | orchestrator | 2025-06-02 19:57:46.017276 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 19:57:46.018545 | orchestrator | Monday 02 June 2025 19:57:46 +0000 (0:00:00.146) 0:01:06.195 *********** 2025-06-02 19:57:46.146993 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:46.148099 | orchestrator | 2025-06-02 19:57:46.149628 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 19:57:46.150620 | orchestrator | Monday 02 June 2025 19:57:46 +0000 (0:00:00.131) 0:01:06.327 *********** 2025-06-02 19:57:46.270386 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:46.270908 | orchestrator | 2025-06-02 19:57:46.271903 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 19:57:46.272518 | orchestrator | Monday 02 June 2025 19:57:46 +0000 (0:00:00.123) 0:01:06.450 *********** 2025-06-02 19:57:46.406701 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:46.406788 | orchestrator | 2025-06-02 19:57:46.407254 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 19:57:46.408020 | orchestrator | Monday 02 June 2025 19:57:46 +0000 (0:00:00.136) 0:01:06.587 *********** 2025-06-02 19:57:46.545209 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:46.545427 | orchestrator | 2025-06-02 19:57:46.546508 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 19:57:46.547033 | orchestrator | Monday 02 June 2025 19:57:46 +0000 (0:00:00.137) 0:01:06.724 *********** 2025-06-02 19:57:46.866203 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:46.866343 | orchestrator | 2025-06-02 19:57:46.866881 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 19:57:46.867616 | orchestrator | Monday 02 June 2025 19:57:46 +0000 (0:00:00.322) 0:01:07.047 *********** 
2025-06-02 19:57:46.993828 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:46.994503 | orchestrator | 2025-06-02 19:57:46.995920 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 19:57:46.996724 | orchestrator | Monday 02 June 2025 19:57:46 +0000 (0:00:00.127) 0:01:07.174 *********** 2025-06-02 19:57:47.131094 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:47.131180 | orchestrator | 2025-06-02 19:57:47.132345 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 19:57:47.133138 | orchestrator | Monday 02 June 2025 19:57:47 +0000 (0:00:00.136) 0:01:07.311 *********** 2025-06-02 19:57:47.259577 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:47.260077 | orchestrator | 2025-06-02 19:57:47.261101 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 19:57:47.262055 | orchestrator | Monday 02 June 2025 19:57:47 +0000 (0:00:00.129) 0:01:07.440 *********** 2025-06-02 19:57:47.408417 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:47.408595 | orchestrator | 2025-06-02 19:57:47.408613 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 19:57:47.408621 | orchestrator | Monday 02 June 2025 19:57:47 +0000 (0:00:00.148) 0:01:07.588 *********** 2025-06-02 19:57:47.536260 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:47.537417 | orchestrator | 2025-06-02 19:57:47.538533 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 19:57:47.539243 | orchestrator | Monday 02 June 2025 19:57:47 +0000 (0:00:00.126) 0:01:07.715 *********** 2025-06-02 19:57:47.684767 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 
19:57:47.684850 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:47.684861 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:47.684869 | orchestrator | 2025-06-02 19:57:47.684875 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 19:57:47.684880 | orchestrator | Monday 02 June 2025 19:57:47 +0000 (0:00:00.148) 0:01:07.863 *********** 2025-06-02 19:57:47.842012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:47.842724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:47.843465 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:47.845336 | orchestrator | 2025-06-02 19:57:47.846290 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 19:57:47.846686 | orchestrator | Monday 02 June 2025 19:57:47 +0000 (0:00:00.158) 0:01:08.021 *********** 2025-06-02 19:57:48.011074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:48.011829 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:48.012842 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:48.013541 | orchestrator | 2025-06-02 19:57:48.015015 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 19:57:48.015048 | orchestrator | Monday 02 June 2025 
19:57:48 +0000 (0:00:00.169) 0:01:08.191 *********** 2025-06-02 19:57:48.159790 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:48.159875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:48.160630 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:48.161975 | orchestrator | 2025-06-02 19:57:48.162290 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 19:57:48.162729 | orchestrator | Monday 02 June 2025 19:57:48 +0000 (0:00:00.149) 0:01:08.340 *********** 2025-06-02 19:57:48.317354 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:48.317516 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:48.318799 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:48.319605 | orchestrator | 2025-06-02 19:57:48.321343 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 19:57:48.321391 | orchestrator | Monday 02 June 2025 19:57:48 +0000 (0:00:00.156) 0:01:08.497 *********** 2025-06-02 19:57:48.466957 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:48.467948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:48.470290 | orchestrator | 
skipping: [testbed-node-5] 2025-06-02 19:57:48.470602 | orchestrator | 2025-06-02 19:57:48.471256 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 19:57:48.473792 | orchestrator | Monday 02 June 2025 19:57:48 +0000 (0:00:00.148) 0:01:08.646 *********** 2025-06-02 19:57:48.807287 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:48.808193 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:48.809628 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:48.810766 | orchestrator | 2025-06-02 19:57:48.812134 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 19:57:48.813068 | orchestrator | Monday 02 June 2025 19:57:48 +0000 (0:00:00.340) 0:01:08.987 *********** 2025-06-02 19:57:48.962463 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:48.964188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:48.965590 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:48.967658 | orchestrator | 2025-06-02 19:57:48.967910 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 19:57:48.968582 | orchestrator | Monday 02 June 2025 19:57:48 +0000 (0:00:00.155) 0:01:09.142 *********** 2025-06-02 19:57:49.461639 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:57:49.462054 | orchestrator | 2025-06-02 19:57:49.462781 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-06-02 19:57:49.463560 | orchestrator | Monday 02 June 2025 19:57:49 +0000 (0:00:00.499) 0:01:09.641 *********** 2025-06-02 19:57:50.005755 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:57:50.006124 | orchestrator | 2025-06-02 19:57:50.006987 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-02 19:57:50.008901 | orchestrator | Monday 02 June 2025 19:57:49 +0000 (0:00:00.543) 0:01:10.185 *********** 2025-06-02 19:57:50.145602 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:57:50.145829 | orchestrator | 2025-06-02 19:57:50.145863 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 19:57:50.146798 | orchestrator | Monday 02 June 2025 19:57:50 +0000 (0:00:00.140) 0:01:10.325 *********** 2025-06-02 19:57:50.318858 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'vg_name': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'}) 2025-06-02 19:57:50.320576 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'vg_name': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'}) 2025-06-02 19:57:50.321521 | orchestrator | 2025-06-02 19:57:50.322757 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 19:57:50.323813 | orchestrator | Monday 02 June 2025 19:57:50 +0000 (0:00:00.173) 0:01:10.499 *********** 2025-06-02 19:57:50.486800 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:50.488237 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:50.488821 | orchestrator | skipping: 
[testbed-node-5] 2025-06-02 19:57:50.489634 | orchestrator | 2025-06-02 19:57:50.492134 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-02 19:57:50.492884 | orchestrator | Monday 02 June 2025 19:57:50 +0000 (0:00:00.167) 0:01:10.666 *********** 2025-06-02 19:57:50.634701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:50.634810 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:50.635430 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:50.636663 | orchestrator | 2025-06-02 19:57:50.638049 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-02 19:57:50.638489 | orchestrator | Monday 02 June 2025 19:57:50 +0000 (0:00:00.146) 0:01:10.813 *********** 2025-06-02 19:57:50.783733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'})  2025-06-02 19:57:50.784262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'})  2025-06-02 19:57:50.784891 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:57:50.785392 | orchestrator | 2025-06-02 19:57:50.785854 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-02 19:57:50.786356 | orchestrator | Monday 02 June 2025 19:57:50 +0000 (0:00:00.151) 0:01:10.964 *********** 2025-06-02 19:57:50.912925 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:57:50.913209 | orchestrator |  "lvm_report": { 2025-06-02 19:57:50.915949 | orchestrator |  "lv": [ 2025-06-02 
19:57:50.916592 | orchestrator |  { 2025-06-02 19:57:50.917061 | orchestrator |  "lv_name": "osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf", 2025-06-02 19:57:50.917669 | orchestrator |  "vg_name": "ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf" 2025-06-02 19:57:50.918110 | orchestrator |  }, 2025-06-02 19:57:50.919159 | orchestrator |  { 2025-06-02 19:57:50.919180 | orchestrator |  "lv_name": "osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1", 2025-06-02 19:57:50.919989 | orchestrator |  "vg_name": "ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1" 2025-06-02 19:57:50.920494 | orchestrator |  } 2025-06-02 19:57:50.920675 | orchestrator |  ], 2025-06-02 19:57:50.921353 | orchestrator |  "pv": [ 2025-06-02 19:57:50.921819 | orchestrator |  { 2025-06-02 19:57:50.922139 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-02 19:57:50.923422 | orchestrator |  "vg_name": "ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf" 2025-06-02 19:57:50.924166 | orchestrator |  }, 2025-06-02 19:57:50.925022 | orchestrator |  { 2025-06-02 19:57:50.925791 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-02 19:57:50.926261 | orchestrator |  "vg_name": "ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1" 2025-06-02 19:57:50.926740 | orchestrator |  } 2025-06-02 19:57:50.927432 | orchestrator |  ] 2025-06-02 19:57:50.927839 | orchestrator |  } 2025-06-02 19:57:50.928461 | orchestrator | } 2025-06-02 19:57:50.929654 | orchestrator | 2025-06-02 19:57:50.930175 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:57:50.930945 | orchestrator | 2025-06-02 19:57:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:57:50.931475 | orchestrator | 2025-06-02 19:57:50 | INFO  | Please wait and do not abort execution. 
2025-06-02 19:57:50.932597 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 19:57:50.933294 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 19:57:50.934352 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 19:57:50.934782 | orchestrator |
2025-06-02 19:57:50.935281 | orchestrator |
2025-06-02 19:57:50.935857 | orchestrator |
2025-06-02 19:57:50.936261 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:57:50.936955 | orchestrator | Monday 02 June 2025 19:57:50 +0000 (0:00:00.128) 0:01:11.093 ***********
2025-06-02 19:57:50.937478 | orchestrator | ===============================================================================
2025-06-02 19:57:50.938288 | orchestrator | Create block VGs -------------------------------------------------------- 5.58s
2025-06-02 19:57:50.938454 | orchestrator | Create block LVs -------------------------------------------------------- 3.93s
2025-06-02 19:57:50.938958 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.93s
2025-06-02 19:57:50.939604 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s
2025-06-02 19:57:50.940155 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s
2025-06-02 19:57:50.940658 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.49s
2025-06-02 19:57:50.941125 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.46s
2025-06-02 19:57:50.941554 | orchestrator | Add known partitions to the list of available block devices ------------- 1.44s
2025-06-02 19:57:50.942174 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s
2025-06-02 19:57:50.942437 | orchestrator | Print LVM report data --------------------------------------------------- 0.92s
2025-06-02 19:57:50.942962 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2025-06-02 19:57:50.943475 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s
2025-06-02 19:57:50.944177 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s
2025-06-02 19:57:50.944584 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.74s
2025-06-02 19:57:50.944976 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2025-06-02 19:57:50.945563 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.70s
2025-06-02 19:57:50.945886 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-06-02 19:57:50.946286 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-06-02 19:57:50.946769 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.68s
2025-06-02 19:57:50.947208 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-06-02 19:57:53.193349 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:57:53.193453 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:57:53.193468 | orchestrator | Registering Redlock._release_script
2025-06-02 19:57:53.249685 | orchestrator | 2025-06-02 19:57:53 | INFO  | Task 832b2393-ae2c-4e28-bdc9-b03d723846e4 (facts) was prepared for execution.
2025-06-02 19:57:53.249822 | orchestrator | 2025-06-02 19:57:53 | INFO  | It takes a moment until task 832b2393-ae2c-4e28-bdc9-b03d723846e4 (facts) has been started and output is visible here.
2025-06-02 19:57:56.955865 | orchestrator |
2025-06-02 19:57:56.956580 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 19:57:56.956614 | orchestrator |
2025-06-02 19:57:56.956691 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 19:57:56.957008 | orchestrator | Monday 02 June 2025 19:57:56 +0000 (0:00:00.201) 0:00:00.201 ***********
2025-06-02 19:57:57.878119 | orchestrator | ok: [testbed-manager]
2025-06-02 19:57:57.878432 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:57:57.880739 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:57:57.880871 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:57:57.880969 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:57:57.881976 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:57:57.882434 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:57:57.883006 | orchestrator |
2025-06-02 19:57:57.883821 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 19:57:57.884285 | orchestrator | Monday 02 June 2025 19:57:57 +0000 (0:00:00.923) 0:00:01.124 ***********
2025-06-02 19:57:58.022907 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:57:58.096340 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:57:58.168020 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:57:58.249757 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:57:58.320072 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:57:58.972579 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:57:58.973131 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:57:58.976626 | orchestrator |
2025-06-02 19:57:58.976710 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 19:57:58.976726 | orchestrator |
2025-06-02 19:57:58.977191 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 19:57:58.977904 | orchestrator | Monday 02 June 2025 19:57:58 +0000 (0:00:01.097) 0:00:02.222 ***********
2025-06-02 19:58:03.933743 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:58:03.934197 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:58:03.935225 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:58:03.939376 | orchestrator | ok: [testbed-manager]
2025-06-02 19:58:03.939447 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:58:03.939456 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:58:03.939465 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:58:03.939723 | orchestrator |
2025-06-02 19:58:03.940857 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 19:58:03.941782 | orchestrator |
2025-06-02 19:58:03.942700 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 19:58:03.943544 | orchestrator | Monday 02 June 2025 19:58:03 +0000 (0:00:04.960) 0:00:07.182 ***********
2025-06-02 19:58:04.095629 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:58:04.174095 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:58:04.251590 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:58:04.332084 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:58:04.436618 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:58:04.480747 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:58:04.481746 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:58:04.482989 | orchestrator |
2025-06-02 19:58:04.483811 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:58:04.484235 | orchestrator | 2025-06-02 19:58:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:58:04.484366 | orchestrator | 2025-06-02 19:58:04 | INFO  | Please wait and do not abort execution.
2025-06-02 19:58:04.485161 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:58:04.485733 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:58:04.486157 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:58:04.486857 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:58:04.487097 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:58:04.488748 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:58:04.489189 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:58:04.492778 | orchestrator |
2025-06-02 19:58:04.492830 | orchestrator |
2025-06-02 19:58:04.492861 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:58:04.492874 | orchestrator | Monday 02 June 2025 19:58:04 +0000 (0:00:00.548) 0:00:07.730 ***********
2025-06-02 19:58:04.492885 | orchestrator | ===============================================================================
2025-06-02 19:58:04.492910 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.96s
2025-06-02 19:58:04.492922 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s
2025-06-02 19:58:04.493000 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.92s
2025-06-02 19:58:04.493422 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2025-06-02
19:58:05.068475 | orchestrator |
2025-06-02 19:58:05.070000 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jun 2 19:58:05 UTC 2025
2025-06-02 19:58:05.070108 | orchestrator |
2025-06-02 19:58:06.724710 | orchestrator | 2025-06-02 19:58:06 | INFO  | Collection nutshell is prepared for execution
2025-06-02 19:58:06.724784 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [0] - dotfiles
2025-06-02 19:58:06.729969 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:58:06.730076 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:58:06.730095 | orchestrator | Registering Redlock._release_script
2025-06-02 19:58:06.734693 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [0] - homer
2025-06-02 19:58:06.734833 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [0] - netdata
2025-06-02 19:58:06.734850 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [0] - openstackclient
2025-06-02 19:58:06.734887 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [0] - phpmyadmin
2025-06-02 19:58:06.734900 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [0] - common
2025-06-02 19:58:06.736791 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [1] -- loadbalancer
2025-06-02 19:58:06.736863 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [2] --- opensearch
2025-06-02 19:58:06.736885 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [2] --- mariadb-ng
2025-06-02 19:58:06.736999 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [3] ---- horizon
2025-06-02 19:58:06.737206 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [3] ---- keystone
2025-06-02 19:58:06.737319 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [4] ----- neutron
2025-06-02 19:58:06.738165 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [5] ------ wait-for-nova
2025-06-02 19:58:06.738186 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [5] ------ octavia
2025-06-02 19:58:06.738329 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [4] ----- barbican
2025-06-02 19:58:06.738422 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [4] ----- designate
2025-06-02 19:58:06.738911 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [4] ----- ironic
2025-06-02 19:58:06.738926 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [4] ----- placement
2025-06-02 19:58:06.740116 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [4] ----- magnum
2025-06-02 19:58:06.740163 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [1] -- openvswitch
2025-06-02 19:58:06.740188 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [2] --- ovn
2025-06-02 19:58:06.740197 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [1] -- memcached
2025-06-02 19:58:06.740207 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [1] -- redis
2025-06-02 19:58:06.740215 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [1] -- rabbitmq-ng
2025-06-02 19:58:06.740285 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [0] - kubernetes
2025-06-02 19:58:06.741908 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [1] -- kubeconfig
2025-06-02 19:58:06.741930 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [1] -- copy-kubeconfig
2025-06-02 19:58:06.742108 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [0] - ceph
2025-06-02 19:58:06.743589 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [1] -- ceph-pools
2025-06-02 19:58:06.744229 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [2] --- copy-ceph-keys
2025-06-02 19:58:06.744244 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [3] ---- cephclient
2025-06-02 19:58:06.744249 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-02 19:58:06.744254 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [4] ----- wait-for-keystone
2025-06-02 19:58:06.744259 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-02 19:58:06.744264 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [5] ------ glance
2025-06-02 19:58:06.744626 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [5] ------ cinder
2025-06-02 19:58:06.744635 |
orchestrator | 2025-06-02 19:58:06 | INFO  | D [5] ------ nova
2025-06-02 19:58:06.744800 | orchestrator | 2025-06-02 19:58:06 | INFO  | A [4] ----- prometheus
2025-06-02 19:58:06.744808 | orchestrator | 2025-06-02 19:58:06 | INFO  | D [5] ------ grafana
2025-06-02 19:58:06.926813 | orchestrator | 2025-06-02 19:58:06 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-02 19:58:06.926904 | orchestrator | 2025-06-02 19:58:06 | INFO  | Tasks are running in the background
2025-06-02 19:58:09.422903 | orchestrator | 2025-06-02 19:58:09 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-02 19:58:11.563634 | orchestrator | 2025-06-02 19:58:11 | INFO  | Task ea8500f3-e433-4de1-af7d-3a79980896b4 is in state STARTED
2025-06-02 19:58:11.563840 | orchestrator | 2025-06-02 19:58:11 | INFO  | Task d998d2ef-eeb9-4240-b4b0-66fe1898bba1 is in state STARTED
2025-06-02 19:58:11.563864 | orchestrator | 2025-06-02 19:58:11 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED
2025-06-02 19:58:11.564381 | orchestrator | 2025-06-02 19:58:11 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED
2025-06-02 19:58:11.565949 | orchestrator | 2025-06-02 19:58:11 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED
2025-06-02 19:58:11.566424 | orchestrator | 2025-06-02 19:58:11 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED
2025-06-02 19:58:11.566889 | orchestrator | 2025-06-02 19:58:11 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED
2025-06-02 19:58:11.566919 | orchestrator | 2025-06-02 19:58:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 19:58:36.156388 | orchestrator |
2025-06-02 19:58:36.156483 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-06-02 19:58:36.156493 | orchestrator |
2025-06-02 19:58:36.156499 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
**** 2025-06-02 19:58:36.156507 | orchestrator | Monday 02 June 2025 19:58:18 +0000 (0:00:00.715) 0:00:00.715 ***********
2025-06-02 19:58:36.156513 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:58:36.156522 | orchestrator | changed: [testbed-manager]
2025-06-02 19:58:36.156529 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:58:36.156535 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:58:36.156542 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:58:36.156548 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:58:36.156555 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:58:36.156561 | orchestrator |
2025-06-02 19:58:36.156568 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-06-02 19:58:36.156575 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:04.680) 0:00:05.395 ***********
2025-06-02 19:58:36.156583 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-02 19:58:36.156590 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 19:58:36.156597 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 19:58:36.156604 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 19:58:36.156611 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 19:58:36.156617 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 19:58:36.156624 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 19:58:36.156632 | orchestrator |
2025-06-02 19:58:36.156639 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2025-06-02 19:58:36.156648 | orchestrator | Monday 02 June 2025 19:58:25 +0000 (0:00:02.172) 0:00:07.568 *********** 2025-06-02 19:58:36.156668 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:58:24.265137', 'end': '2025-06-02 19:58:24.273120', 'delta': '0:00:00.007983', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:58:36.156679 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:58:24.255068', 'end': '2025-06-02 19:58:24.262195', 'delta': '0:00:00.007127', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:58:36.156709 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:58:24.263660', 'end': '2025-06-02 19:58:24.273871', 'delta': '0:00:00.010211', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:58:36.156739 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:58:24.542946', 'end': '2025-06-02 19:58:24.547974', 'delta': '0:00:00.005028', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:58:36.156748 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:58:24.797027', 'end': '2025-06-02 19:58:24.805995', 'delta': '0:00:00.008968', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:58:36.156758 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:58:25.010732', 'end': '2025-06-02 19:58:25.020522', 'delta': '0:00:00.009790', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:58:36.156776 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:58:25.327046', 'end': '2025-06-02 19:58:25.335781', 'delta': '0:00:00.008735', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:58:36.156782 | orchestrator | 2025-06-02 19:58:36.156789 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-06-02 19:58:36.156796 | orchestrator | Monday 02 June 2025 19:58:28 +0000 (0:00:02.886) 0:00:10.454 *********** 2025-06-02 19:58:36.156803 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-02 19:58:36.156809 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 19:58:36.156816 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 19:58:36.156822 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 19:58:36.156829 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 19:58:36.156835 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 19:58:36.156842 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 19:58:36.156849 | orchestrator | 2025-06-02 19:58:36.156855 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-06-02 19:58:36.156863 | orchestrator | Monday 02 June 2025 19:58:30 +0000 (0:00:02.206) 0:00:12.661 ***********
2025-06-02 19:58:36.156874 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-06-02 19:58:36.156885 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 19:58:36.156895 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 19:58:36.156906 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 19:58:36.156915 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 19:58:36.156925 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 19:58:36.156936 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 19:58:36.156947 | orchestrator |
2025-06-02 19:58:36.156956 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:58:36.156974 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:36.156986 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:36.156997 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:36.157005 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:36.157013 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:36.157024 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:36.157034 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:36.157043 | orchestrator |
2025-06-02 19:58:36.157061 | orchestrator |
2025-06-02 19:58:36.157071 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:58:36.157081 | orchestrator | Monday 02 June 2025 19:58:34 +0000 (0:00:03.948) 0:00:16.610 ***********
2025-06-02 19:58:36.157091 | orchestrator | ===============================================================================
2025-06-02 19:58:36.157102 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.68s
2025-06-02 19:58:36.157111 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.95s
2025-06-02 19:58:36.157122 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.89s
2025-06-02 19:58:36.157130 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.21s
2025-06-02 19:58:36.157141 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.17s
2025-06-02 19:58:36.157182 | orchestrator | 2025-06-02 19:58:36 | INFO  | Task ea8500f3-e433-4de1-af7d-3a79980896b4 is in state SUCCESS
2025-06-02 19:58:36.159215 | orchestrator | 2025-06-02 19:58:36 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED
2025-06-02 19:58:36.161323 | orchestrator | 2025-06-02 19:58:36 | INFO  | Task d998d2ef-eeb9-4240-b4b0-66fe1898bba1 is in state STARTED
2025-06-02 19:58:36.165724 | orchestrator | 2025-06-02 19:58:36 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED
2025-06-02 19:58:36.167079 | orchestrator | 2025-06-02 19:58:36 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED
2025-06-02 19:58:36.167962 | orchestrator | 2025-06-02 19:58:36 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED
2025-06-02 19:58:36.171263 | orchestrator | 2025-06-02 19:58:36 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED
2025-06-02 19:58:36.171393 | orchestrator | 2025-06-02 19:58:36 | INFO  | Task
0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:58:36.171416 | orchestrator | 2025-06-02 19:58:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:39.202965 | orchestrator | 2025-06-02 19:58:39 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:58:39.204621 | orchestrator | 2025-06-02 19:58:39 | INFO  | Task d998d2ef-eeb9-4240-b4b0-66fe1898bba1 is in state STARTED 2025-06-02 19:58:39.205893 | orchestrator | 2025-06-02 19:58:39 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED 2025-06-02 19:58:39.206602 | orchestrator | 2025-06-02 19:58:39 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:58:39.208156 | orchestrator | 2025-06-02 19:58:39 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:58:39.208734 | orchestrator | 2025-06-02 19:58:39 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:58:39.211598 | orchestrator | 2025-06-02 19:58:39 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:58:39.211714 | orchestrator | 2025-06-02 19:58:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:42.256434 | orchestrator | 2025-06-02 19:58:42 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:58:42.259174 | orchestrator | 2025-06-02 19:58:42 | INFO  | Task d998d2ef-eeb9-4240-b4b0-66fe1898bba1 is in state STARTED 2025-06-02 19:58:42.259265 | orchestrator | 2025-06-02 19:58:42 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED 2025-06-02 19:58:42.260202 | orchestrator | 2025-06-02 19:58:42 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:58:42.265779 | orchestrator | 2025-06-02 19:58:42 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:58:42.269531 | orchestrator | 2025-06-02 19:58:42 | INFO  | Task 
620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:58:42.272518 | orchestrator | 2025-06-02 19:58:42 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:58:42.273164 | orchestrator | 2025-06-02 19:58:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:45.313385 | orchestrator | 2025-06-02 19:58:45 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:58:45.314733 | orchestrator | 2025-06-02 19:58:45 | INFO  | Task d998d2ef-eeb9-4240-b4b0-66fe1898bba1 is in state STARTED 2025-06-02 19:58:45.314816 | orchestrator | 2025-06-02 19:58:45 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED 2025-06-02 19:58:45.314936 | orchestrator | 2025-06-02 19:58:45 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:58:45.317236 | orchestrator | 2025-06-02 19:58:45 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:58:45.320996 | orchestrator | 2025-06-02 19:58:45 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:58:45.326639 | orchestrator | 2025-06-02 19:58:45 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:58:45.326708 | orchestrator | 2025-06-02 19:58:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:48.386881 | orchestrator | 2025-06-02 19:58:48 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:58:48.387007 | orchestrator | 2025-06-02 19:58:48 | INFO  | Task d998d2ef-eeb9-4240-b4b0-66fe1898bba1 is in state STARTED 2025-06-02 19:58:48.389217 | orchestrator | 2025-06-02 19:58:48 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED 2025-06-02 19:58:48.390897 | orchestrator | 2025-06-02 19:58:48 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:58:48.393383 | orchestrator | 2025-06-02 19:58:48 | INFO  | Task 
759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:58:48.396944 | orchestrator | 2025-06-02 19:58:48 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:58:48.398212 | orchestrator | 2025-06-02 19:58:48 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:58:48.398270 | orchestrator | 2025-06-02 19:58:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:51.442214 | orchestrator | 2025-06-02 19:58:51 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:58:51.442324 | orchestrator | 2025-06-02 19:58:51 | INFO  | Task d998d2ef-eeb9-4240-b4b0-66fe1898bba1 is in state STARTED 2025-06-02 19:58:51.445037 | orchestrator | 2025-06-02 19:58:51 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED 2025-06-02 19:58:51.445612 | orchestrator | 2025-06-02 19:58:51 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:58:51.447216 | orchestrator | 2025-06-02 19:58:51 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:58:51.447706 | orchestrator | 2025-06-02 19:58:51 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:58:51.449343 | orchestrator | 2025-06-02 19:58:51 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:58:51.449383 | orchestrator | 2025-06-02 19:58:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:54.508591 | orchestrator | 2025-06-02 19:58:54 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:58:54.514950 | orchestrator | 2025-06-02 19:58:54 | INFO  | Task d998d2ef-eeb9-4240-b4b0-66fe1898bba1 is in state STARTED 2025-06-02 19:58:54.517056 | orchestrator | 2025-06-02 19:58:54 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED 2025-06-02 19:58:54.526928 | orchestrator | 2025-06-02 19:58:54 | INFO  | Task 
7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:58:54.527748 | orchestrator | 2025-06-02 19:58:54 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:58:54.533232 | orchestrator | 2025-06-02 19:58:54 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:58:54.543675 | orchestrator | 2025-06-02 19:58:54 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:58:54.543745 | orchestrator | 2025-06-02 19:58:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:57.616777 | orchestrator | 2025-06-02 19:58:57 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:58:57.616859 | orchestrator | 2025-06-02 19:58:57 | INFO  | Task d998d2ef-eeb9-4240-b4b0-66fe1898bba1 is in state SUCCESS 2025-06-02 19:58:57.616899 | orchestrator | 2025-06-02 19:58:57 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED 2025-06-02 19:58:57.617693 | orchestrator | 2025-06-02 19:58:57 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:58:57.620109 | orchestrator | 2025-06-02 19:58:57 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:58:57.622157 | orchestrator | 2025-06-02 19:58:57 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:58:57.622983 | orchestrator | 2025-06-02 19:58:57 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:58:57.623020 | orchestrator | 2025-06-02 19:58:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:59:00.664246 | orchestrator | 2025-06-02 19:59:00 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:59:00.664413 | orchestrator | 2025-06-02 19:59:00 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state STARTED 2025-06-02 19:59:00.664425 | orchestrator | 2025-06-02 19:59:00 | INFO  | Task 
7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:59:00.664488 | orchestrator | 2025-06-02 19:59:00 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:59:00.664862 | orchestrator | 2025-06-02 19:59:00 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:59:00.665409 | orchestrator | 2025-06-02 19:59:00 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:59:00.665471 | orchestrator | 2025-06-02 19:59:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:59:03.703932 | orchestrator | 2025-06-02 19:59:03 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:59:03.705445 | orchestrator | 2025-06-02 19:59:03 | INFO  | Task ab232f1e-2c5f-49fc-ac27-eefd9d479d6b is in state SUCCESS 2025-06-02 19:59:03.706058 | orchestrator | 2025-06-02 19:59:03 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:59:03.706586 | orchestrator | 2025-06-02 19:59:03 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:59:03.709455 | orchestrator | 2025-06-02 19:59:03 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:59:03.709676 | orchestrator | 2025-06-02 19:59:03 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:59:03.709751 | orchestrator | 2025-06-02 19:59:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:59:06.758805 | orchestrator | 2025-06-02 19:59:06 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:59:06.758896 | orchestrator | 2025-06-02 19:59:06 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:59:06.758907 | orchestrator | 2025-06-02 19:59:06 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:59:06.759027 | orchestrator | 2025-06-02 19:59:06 | INFO  | Task 
620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:59:06.761742 | orchestrator | 2025-06-02 19:59:06 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:59:06.761814 | orchestrator | 2025-06-02 19:59:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:59:09.827132 | orchestrator | 2025-06-02 19:59:09 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:59:09.827230 | orchestrator | 2025-06-02 19:59:09 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:59:09.827243 | orchestrator | 2025-06-02 19:59:09 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:59:09.827252 | orchestrator | 2025-06-02 19:59:09 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:59:09.827259 | orchestrator | 2025-06-02 19:59:09 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:59:09.827267 | orchestrator | 2025-06-02 19:59:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:59:12.858632 | orchestrator | 2025-06-02 19:59:12 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:59:12.858718 | orchestrator | 2025-06-02 19:59:12 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:59:12.860535 | orchestrator | 2025-06-02 19:59:12 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:59:12.860904 | orchestrator | 2025-06-02 19:59:12 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:59:12.861787 | orchestrator | 2025-06-02 19:59:12 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:59:12.861817 | orchestrator | 2025-06-02 19:59:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:59:15.927710 | orchestrator | 2025-06-02 19:59:15 | INFO  | Task 
e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:59:15.927786 | orchestrator | 2025-06-02 19:59:15 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:59:15.927792 | orchestrator | 2025-06-02 19:59:15 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:59:15.927797 | orchestrator | 2025-06-02 19:59:15 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:59:15.929905 | orchestrator | 2025-06-02 19:59:15 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:59:15.929960 | orchestrator | 2025-06-02 19:59:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:59:18.990005 | orchestrator | 2025-06-02 19:59:18 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:59:18.990550 | orchestrator | 2025-06-02 19:59:18 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state STARTED 2025-06-02 19:59:18.993311 | orchestrator | 2025-06-02 19:59:18 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 19:59:18.994120 | orchestrator | 2025-06-02 19:59:18 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 19:59:18.995365 | orchestrator | 2025-06-02 19:59:18 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED 2025-06-02 19:59:18.999904 | orchestrator | 2025-06-02 19:59:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:59:22.039189 | orchestrator | 2025-06-02 19:59:22 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state STARTED 2025-06-02 19:59:22.040825 | orchestrator | 2025-06-02 19:59:22 | INFO  | Task 7e1f107f-8599-4cb1-9920-1fe9f2dfb6c9 is in state SUCCESS 2025-06-02 19:59:22.043394 | orchestrator | 2025-06-02 19:59:22.043454 | orchestrator | 2025-06-02 19:59:22.043462 | orchestrator | PLAY [Apply role homer] ******************************************************** 
2025-06-02 19:59:22.043468 | orchestrator |
2025-06-02 19:59:22.043473 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-06-02 19:59:22.043478 | orchestrator | Monday 02 June 2025 19:58:18 +0000 (0:00:00.994) 0:00:00.994 ***********
2025-06-02 19:59:22.043483 | orchestrator | ok: [testbed-manager] => {
2025-06-02 19:59:22.043492 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-06-02 19:59:22.043503 | orchestrator | }
2025-06-02 19:59:22.043511 | orchestrator |
2025-06-02 19:59:22.043518 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-06-02 19:59:22.043524 | orchestrator | Monday 02 June 2025 19:58:19 +0000 (0:00:00.495) 0:00:01.489 ***********
2025-06-02 19:59:22.043531 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.043538 | orchestrator |
2025-06-02 19:59:22.043544 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-06-02 19:59:22.043551 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:02.148) 0:00:03.638 ***********
2025-06-02 19:59:22.043558 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-06-02 19:59:22.043564 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-06-02 19:59:22.043570 | orchestrator |
2025-06-02 19:59:22.043576 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-06-02 19:59:22.043582 | orchestrator | Monday 02 June 2025 19:58:22 +0000 (0:00:01.080) 0:00:04.719 ***********
2025-06-02 19:59:22.043588 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.043595 | orchestrator |
2025-06-02 19:59:22.043601 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-06-02 19:59:22.043608 | orchestrator | Monday 02 June 2025 19:58:25 +0000 (0:00:03.015) 0:00:07.734 ***********
2025-06-02 19:59:22.043614 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.043620 | orchestrator |
2025-06-02 19:59:22.043626 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-06-02 19:59:22.043633 | orchestrator | Monday 02 June 2025 19:58:27 +0000 (0:00:02.028) 0:00:09.763 ***********
2025-06-02 19:59:22.043641 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-06-02 19:59:22.043648 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.043655 | orchestrator |
2025-06-02 19:59:22.043661 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-06-02 19:59:22.043668 | orchestrator | Monday 02 June 2025 19:58:51 +0000 (0:00:24.256) 0:00:34.019 ***********
2025-06-02 19:59:22.043674 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.043681 | orchestrator |
2025-06-02 19:59:22.043687 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:59:22.043693 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:59:22.043701 | orchestrator |
2025-06-02 19:59:22.043708 | orchestrator |
2025-06-02 19:59:22.043714 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:59:22.043718 | orchestrator | Monday 02 June 2025 19:58:53 +0000 (0:00:01.832) 0:00:35.852 ***********
2025-06-02 19:59:22.043737 | orchestrator | ===============================================================================
2025-06-02 19:59:22.043743 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.26s
2025-06-02 19:59:22.043750 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.02s
2025-06-02 19:59:22.043755 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.15s
2025-06-02 19:59:22.043761 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.03s
2025-06-02 19:59:22.043768 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.83s
2025-06-02 19:59:22.043774 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.08s
2025-06-02 19:59:22.043781 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.50s
2025-06-02 19:59:22.043787 | orchestrator |
2025-06-02 19:59:22.043793 | orchestrator |
2025-06-02 19:59:22.043800 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-06-02 19:59:22.043807 | orchestrator |
2025-06-02 19:59:22.043819 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-06-02 19:59:22.043825 | orchestrator | Monday 02 June 2025 19:58:17 +0000 (0:00:00.465) 0:00:00.465 ***********
2025-06-02 19:59:22.043832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-06-02 19:59:22.043840 | orchestrator |
2025-06-02 19:59:22.043846 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-06-02 19:59:22.043852 | orchestrator | Monday 02 June 2025 19:58:17 +0000 (0:00:00.577) 0:00:01.043 ***********
2025-06-02 19:59:22.043859 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-06-02 19:59:22.043865 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-06-02 19:59:22.043872 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-06-02 19:59:22.043878 | orchestrator |
2025-06-02 19:59:22.043884 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-06-02 19:59:22.043891 | orchestrator | Monday 02 June 2025 19:58:19 +0000 (0:00:01.708) 0:00:02.751 ***********
2025-06-02 19:59:22.043897 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.043903 | orchestrator |
2025-06-02 19:59:22.043909 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-06-02 19:59:22.043917 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:02.074) 0:00:04.826 ***********
2025-06-02 19:59:22.043933 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-06-02 19:59:22.043937 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.043941 | orchestrator |
2025-06-02 19:59:22.043945 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-06-02 19:59:22.043948 | orchestrator | Monday 02 June 2025 19:58:56 +0000 (0:00:34.638) 0:00:39.465 ***********
2025-06-02 19:59:22.043952 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.043956 | orchestrator |
2025-06-02 19:59:22.043959 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-06-02 19:59:22.043963 | orchestrator | Monday 02 June 2025 19:58:57 +0000 (0:00:00.867) 0:00:40.332 ***********
2025-06-02 19:59:22.043967 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.043971 | orchestrator |
2025-06-02 19:59:22.043974 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-06-02 19:59:22.043978 | orchestrator | Monday 02 June 2025 19:58:57 +0000 (0:00:00.652) 0:00:40.984 ***********
2025-06-02 19:59:22.043982 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.043985 | orchestrator |
2025-06-02 19:59:22.043989 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-06-02 19:59:22.043993 | orchestrator | Monday 02 June 2025 19:58:59 +0000 (0:00:01.612) 0:00:42.597 ***********
2025-06-02 19:59:22.044002 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.044006 | orchestrator |
2025-06-02 19:59:22.044011 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-06-02 19:59:22.044016 | orchestrator | Monday 02 June 2025 19:59:00 +0000 (0:00:00.633) 0:00:43.230 ***********
2025-06-02 19:59:22.044020 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.044024 | orchestrator |
2025-06-02 19:59:22.044029 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-06-02 19:59:22.044033 | orchestrator | Monday 02 June 2025 19:59:00 +0000 (0:00:00.539) 0:00:43.770 ***********
2025-06-02 19:59:22.044038 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.044042 | orchestrator |
2025-06-02 19:59:22.044047 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:59:22.044051 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:59:22.044056 | orchestrator |
2025-06-02 19:59:22.044060 | orchestrator |
2025-06-02 19:59:22.044064 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:59:22.044069 | orchestrator | Monday 02 June 2025 19:59:01 +0000 (0:00:00.478) 0:00:44.249 ***********
2025-06-02 19:59:22.044073 | orchestrator | ===============================================================================
2025-06-02 19:59:22.044077 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.64s
2025-06-02 19:59:22.044082 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.07s
2025-06-02 19:59:22.044086 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.71s
2025-06-02 19:59:22.044091 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.61s
2025-06-02 19:59:22.044096 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.87s
2025-06-02 19:59:22.044103 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.65s
2025-06-02 19:59:22.044109 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.63s
2025-06-02 19:59:22.044115 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.58s
2025-06-02 19:59:22.044122 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.54s
2025-06-02 19:59:22.044128 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.48s
2025-06-02 19:59:22.044134 | orchestrator |
2025-06-02 19:59:22.044140 | orchestrator |
2025-06-02 19:59:22.044146 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 19:59:22.044152 | orchestrator |
2025-06-02 19:59:22.044158 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 19:59:22.044164 | orchestrator | Monday 02 June 2025 19:58:18 +0000 (0:00:00.523) 0:00:00.523 ***********
2025-06-02 19:59:22.044170 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-06-02 19:59:22.044176 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-06-02 19:59:22.044186 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-06-02 19:59:22.044193 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-06-02 19:59:22.044200 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-06-02 19:59:22.044206 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-06-02 19:59:22.044213 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-06-02 19:59:22.044220 | orchestrator |
2025-06-02 19:59:22.044226 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-06-02 19:59:22.044233 | orchestrator |
2025-06-02 19:59:22.044239 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-06-02 19:59:22.044246 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:02.607) 0:00:03.130 ***********
2025-06-02 19:59:22.044259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:59:22.044311 | orchestrator |
2025-06-02 19:59:22.044316 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-06-02 19:59:22.044321 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:02.399) 0:00:05.530 ***********
2025-06-02 19:59:22.044325 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.044330 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:59:22.044334 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:59:22.044339 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:59:22.044343 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:59:22.044353 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:59:22.044358 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:59:22.044362 | orchestrator |
2025-06-02 19:59:22.044367 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-06-02 19:59:22.044372 | orchestrator | Monday 02 June 2025 19:58:25 +0000 (0:00:02.456) 0:00:07.987 ***********
2025-06-02 19:59:22.044376 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.044380 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:59:22.044385 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:59:22.044388 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:59:22.044392 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:59:22.044396 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:59:22.044399 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:59:22.044403 | orchestrator |
2025-06-02 19:59:22.044407 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-06-02 19:59:22.044411 | orchestrator | Monday 02 June 2025 19:58:30 +0000 (0:00:04.142) 0:00:12.129 ***********
2025-06-02 19:59:22.044415 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:59:22.044421 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.044427 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:59:22.044432 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:59:22.044438 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:59:22.044444 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:59:22.044451 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:59:22.044457 | orchestrator |
2025-06-02 19:59:22.044464 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-06-02 19:59:22.044468 | orchestrator | Monday 02 June 2025 19:58:33 +0000 (0:00:03.597) 0:00:15.729 ***********
2025-06-02 19:59:22.044472 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.044476 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:59:22.044479 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:59:22.044483 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:59:22.044487 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:59:22.044491 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:59:22.044495 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:59:22.044499 | orchestrator |
2025-06-02 19:59:22.044502 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-06-02 19:59:22.044506 | orchestrator | Monday 02 June 2025 19:58:44 +0000 (0:00:10.405) 0:00:26.135 ***********
2025-06-02 19:59:22.044510 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:59:22.044514 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.044518 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:59:22.044522 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:59:22.044525 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:59:22.044529 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:59:22.044533 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:59:22.044537 | orchestrator |
2025-06-02 19:59:22.044540 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-06-02 19:59:22.044544 | orchestrator | Monday 02 June 2025 19:59:00 +0000 (0:00:16.060) 0:00:42.196 ***********
2025-06-02 19:59:22.044549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:59:22.044562 | orchestrator |
2025-06-02 19:59:22.044566 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-06-02 19:59:22.044570 | orchestrator | Monday 02 June 2025 19:59:01 +0000 (0:00:01.556) 0:00:43.752 ***********
2025-06-02 19:59:22.044574 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-06-02 19:59:22.044578 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-06-02 19:59:22.044582 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-06-02 19:59:22.044586 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-06-02 19:59:22.044589 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-06-02 19:59:22.044593 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-06-02 19:59:22.044597 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-06-02 19:59:22.044601 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-06-02 19:59:22.044605 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-06-02 19:59:22.044609 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-06-02 19:59:22.044612 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-06-02 19:59:22.044616 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-06-02 19:59:22.044620 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-06-02 19:59:22.044624 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-06-02 19:59:22.044628 | orchestrator |
2025-06-02 19:59:22.044632 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-06-02 19:59:22.044637 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:04.629) 0:00:48.381 ***********
2025-06-02 19:59:22.044641 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.044644 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:59:22.044648 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:59:22.044652 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:59:22.044678 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:59:22.044682 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:59:22.044686 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:59:22.044689 | orchestrator |
2025-06-02 19:59:22.044693 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-06-02 19:59:22.044697 | orchestrator | Monday 02 June 2025 19:59:07 +0000 (0:00:01.173) 0:00:49.555 ***********
2025-06-02 19:59:22.044701 | orchestrator | changed: [testbed-manager]
2025-06-02 19:59:22.044704 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:59:22.044708 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:59:22.044712 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:59:22.044715 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:59:22.044719 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:59:22.044723 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:59:22.044726 | orchestrator |
2025-06-02 19:59:22.044730 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-06-02 19:59:22.044738 | orchestrator | Monday 02 June 2025 19:59:09 +0000 (0:00:02.123) 0:00:51.678 ***********
2025-06-02 19:59:22.044742 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.044746 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:59:22.044750 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:59:22.044754 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:59:22.044757 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:59:22.044761 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:59:22.044765 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:59:22.044768 | orchestrator |
2025-06-02 19:59:22.044772 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-06-02 19:59:22.044776 | orchestrator | Monday 02 June 2025 19:59:11 +0000 (0:00:01.960) 0:00:53.639 ***********
2025-06-02 19:59:22.044780 | orchestrator | ok: [testbed-manager]
2025-06-02 19:59:22.044785 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:59:22.044792 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:59:22.044803 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:59:22.044809 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:59:22.044815 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:59:22.044821 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:59:22.044827 | orchestrator |
2025-06-02 19:59:22.044833 | orchestrator | TASK [osism.services.netdata : Include host type specific
tasks] *************** 2025-06-02 19:59:22.044839 | orchestrator | Monday 02 June 2025 19:59:13 +0000 (0:00:01.844) 0:00:55.483 *********** 2025-06-02 19:59:22.044846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-02 19:59:22.044853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:59:22.044860 | orchestrator | 2025-06-02 19:59:22.044867 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-02 19:59:22.044875 | orchestrator | Monday 02 June 2025 19:59:15 +0000 (0:00:02.096) 0:00:57.579 *********** 2025-06-02 19:59:22.044882 | orchestrator | changed: [testbed-manager] 2025-06-02 19:59:22.044888 | orchestrator | 2025-06-02 19:59:22.044895 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-02 19:59:22.044901 | orchestrator | Monday 02 June 2025 19:59:18 +0000 (0:00:02.698) 0:01:00.278 *********** 2025-06-02 19:59:22.044907 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:59:22.044914 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:59:22.044920 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:59:22.044926 | orchestrator | changed: [testbed-manager] 2025-06-02 19:59:22.044933 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:59:22.044939 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:59:22.044945 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:59:22.044951 | orchestrator | 2025-06-02 19:59:22.044958 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:59:22.044964 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2025-06-02 19:59:22.044971 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:59:22.044978 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:59:22.044984 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:59:22.044990 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:59:22.044996 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:59:22.045003 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:59:22.045009 | orchestrator | 2025-06-02 19:59:22.045016 | orchestrator | 2025-06-02 19:59:22.045026 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:59:22.045032 | orchestrator | Monday 02 June 2025 19:59:21 +0000 (0:00:03.326) 0:01:03.605 *********** 2025-06-02 19:59:22.045037 | orchestrator | =============================================================================== 2025-06-02 19:59:22.045043 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.06s 2025-06-02 19:59:22.045049 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.40s 2025-06-02 19:59:22.045056 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.63s 2025-06-02 19:59:22.045062 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.14s 2025-06-02 19:59:22.045074 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.60s 2025-06-02 19:59:22.045080 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.33s 
osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.70s
Group hosts based on enabled services ----------------------------------- 2.61s
osism.services.netdata : Remove old architecture-dependent repository --- 2.46s
osism.services.netdata : Include distribution specific install tasks ---- 2.40s
osism.services.netdata : Opt out from anonymous statistics -------------- 2.12s
osism.services.netdata : Include host type specific tasks --------------- 2.10s
osism.services.netdata : Add netdata user to docker group --------------- 1.96s
osism.services.netdata : Manage service netdata ------------------------- 1.84s
osism.services.netdata : Include config tasks --------------------------- 1.56s
osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.17s
2025-06-02 19:59:22 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED
2025-06-02 19:59:22 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED
2025-06-02 19:59:22 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state STARTED
2025-06-02 19:59:22 | INFO  | Wait 1 second(s) until the next check
[identical state checks for tasks e578e991-a512-4999-bb27-d6462c89a297, 759aa102-add1-4ef9-9e7a-938bc680cb67, 620b0787-61ce-4ed5-8583-8ae3717560ee and 0304de24-585b-4a8d-a000-40ed629ca824 repeated every ~3 seconds from 19:59:25 to 19:59:49, all in state STARTED]
2025-06-02 19:59:52 | INFO  | Task e578e991-a512-4999-bb27-d6462c89a297 is in state SUCCESS
[identical state checks for the remaining three tasks repeated every ~3 seconds from 19:59:55 to 20:00:35, all in state STARTED]
2025-06-02 20:00:38 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED
2025-06-02 20:00:38 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED
2025-06-02 20:00:38 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED
2025-06-02 20:00:38 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED
2025-06-02 20:00:38 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED
2025-06-02 20:00:38 | INFO  | Task 1192ec47-1741-424b-89cc-f7747cb6bc86 is in state STARTED
2025-06-02 20:00:38 | INFO  | Task 0304de24-585b-4a8d-a000-40ed629ca824 is in state SUCCESS

PLAY [Apply role phpmyadmin] ***************************************************

TASK [osism.services.phpmyadmin : Create traefik external network] *************
Monday 02 June 2025 19:58:40 +0000 (0:00:00.273) 0:00:00.273 ***********
ok: [testbed-manager]

TASK [osism.services.phpmyadmin : Create required directories] *****************
Monday 02 June 2025 19:58:40 +0000 (0:00:00.669) 0:00:00.942 ***********
changed: [testbed-manager] => (item=/opt/phpmyadmin)

TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
Monday 02 June 2025 19:58:41 +0000 (0:00:00.573) 0:00:01.515 ***********
changed: [testbed-manager]
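The manager's wait loop visible in the log above (re-check every task ID, sleep, repeat until the state leaves STARTED) can be sketched in Python. This is a hypothetical sketch of the pattern, not the actual OSISM API: `wait_for_tasks`, `get_state`, and the simulated backend below are stand-in names.

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll until every task has left the STARTED state.

    get_state(task_id) -> str is a hypothetical accessor standing in
    for the real task backend; the log shows STARTED and SUCCESS.
    """
    pending = list(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state != "STARTED":  # terminal state, e.g. SUCCESS
                pending.remove(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Simulated backend: task-a needs two polls, task-b finishes immediately.
states = {"task-a": iter(["STARTED", "SUCCESS"]), "task-b": iter(["SUCCESS"])}
wait_for_tasks(["task-a", "task-b"], lambda t: next(states[t]), interval=0)
```

With a one-second nominal interval plus per-check overhead, this loop produces the roughly three-second cadence between check batches seen in the log.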
TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
Monday 02 June 2025 19:58:42 +0000 (0:00:01.324) 0:00:02.840 ***********
FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
ok: [testbed-manager]

RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
Monday 02 June 2025 19:59:48 +0000 (0:01:05.576) 0:01:08.416 ***********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Monday 02 June 2025 19:59:51 +0000 (0:00:03.585) 0:01:12.002 ***********
===============================================================================
osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 65.58s
osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.59s
osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.32s
osism.services.phpmyadmin : Create traefik external network ------------- 0.67s
osism.services.phpmyadmin : Create required directories ----------------- 0.57s

PLAY [Apply role common] *******************************************************

TASK [common : include_tasks] **************************************************
Monday 02 June 2025 19:58:11 +0000 (0:00:00.253) 0:00:00.253 ***********
included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [common : Ensuring config directories exist] ******************************
Monday 02 June 2025 19:58:12 +0000 (0:00:01.238) 0:00:01.492 ***********
changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])

TASK [common : include_tasks] **************************************************
Monday 02 June 2025 19:58:16 +0000 (0:00:03.998) 0:00:05.491 ***********
included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [service-cert-copy : common | Copying over extra CA certificates] *********
Monday 02 June 2025 19:58:17 +0000 (0:00:01.114) 0:00:06.605 ***********
changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image':
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331046 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331090 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331132 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.331172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331188 | orchestrator |
2025-06-02 20:00:38.331199 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-06-02 20:00:38.331209 | orchestrator | Monday 02 June 2025 19:58:22 +0000 (0:00:04.738) 0:00:11.343 ***********
2025-06-02 20:00:38.331232 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331243 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331281 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331293 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:00:38.331308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331388 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:00:38.331398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331433 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:00:38.331443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331480 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:00:38.331490 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:00:38.331506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331566 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:00:38.331589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331642 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:00:38.331667 | orchestrator |
2025-06-02 20:00:38.331678 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-06-02 20:00:38.331688 | orchestrator | Monday 02 June 2025 19:58:24 +0000 (0:00:01.829) 0:00:13.173 ***********
2025-06-02 20:00:38.331698 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331708 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331725 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331736 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:00:38.331746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331781 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:00:38.331791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331827 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:00:38.331837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331874 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:00:38.331884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331920 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:00:38.331930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.331940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.331973 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:00:38.331983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.332002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.332016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:00:38.332027 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:00:38.332036 | orchestrator |
2025-06-02 20:00:38.332046 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-06-02 20:00:38.332062 | orchestrator | Monday 02 June 2025 19:58:26 +0000 (0:00:02.584) 0:00:15.757 ***********
2025-06-02 20:00:38.332072 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:00:38.332081 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:00:38.332091 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:00:38.332101 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:00:38.332110 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:00:38.332120 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:00:38.332129 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:00:38.332139 | orchestrator |
2025-06-02 20:00:38.332148 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-06-02 20:00:38.332158 | orchestrator | Monday 02 June 2025 19:58:27 +0000 (0:00:01.010) 0:00:16.768 ***********
2025-06-02 20:00:38.332168 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:00:38.332177 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:00:38.332187 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:00:38.332196 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:00:38.332206 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:00:38.332215 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:00:38.332225 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:00:38.332234 | orchestrator |
2025-06-02 20:00:38.332244 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-06-02 20:00:38.332295 | orchestrator | Monday 02 June 2025 19:58:29 +0000 (0:00:01.152) 0:00:17.920 ***********
2025-06-02 20:00:38.332306 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 20:00:38.332317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.332334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.332345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.332355 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.332399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.332409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332419 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332435 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.332445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332507 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332594 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.332611 | orchestrator | 2025-06-02 20:00:38.332629 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-02 20:00:38.332652 | orchestrator | Monday 02 June 2025 19:58:35 +0000 (0:00:06.166) 0:00:24.086 *********** 2025-06-02 20:00:38.332669 | orchestrator | [WARNING]: Skipped 2025-06-02 20:00:38.332681 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-02 20:00:38.332691 | orchestrator | to this access issue: 2025-06-02 20:00:38.332701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-02 20:00:38.332710 | orchestrator | directory 2025-06-02 20:00:38.332720 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 20:00:38.332730 | orchestrator | 2025-06-02 20:00:38.332740 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-02 20:00:38.332749 | orchestrator | Monday 02 June 2025 19:58:37 +0000 (0:00:02.130) 0:00:26.217 *********** 2025-06-02 20:00:38.332759 | orchestrator | [WARNING]: Skipped 2025-06-02 20:00:38.332769 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-02 20:00:38.332779 | orchestrator | to this access issue: 2025-06-02 20:00:38.332788 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-02 20:00:38.332798 | orchestrator | directory 2025-06-02 20:00:38.332808 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 20:00:38.332817 | orchestrator | 2025-06-02 20:00:38.332827 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-02 20:00:38.332837 | orchestrator | Monday 02 June 2025 19:58:38 +0000 (0:00:00.817) 0:00:27.034 *********** 2025-06-02 20:00:38.332847 | orchestrator | [WARNING]: Skipped 2025-06-02 20:00:38.332856 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-02 20:00:38.332866 | orchestrator | to this access issue: 2025-06-02 20:00:38.332876 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-02 20:00:38.332885 | orchestrator | directory 2025-06-02 20:00:38.332895 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 20:00:38.332905 | orchestrator | 2025-06-02 20:00:38.332915 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-02 20:00:38.332924 | orchestrator | Monday 02 June 2025 19:58:39 +0000 (0:00:00.884) 0:00:27.918 *********** 2025-06-02 20:00:38.332934 | orchestrator | [WARNING]: Skipped 2025-06-02 20:00:38.332944 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-02 20:00:38.332954 | orchestrator | to this access issue: 2025-06-02 20:00:38.332963 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-02 20:00:38.332973 | orchestrator | directory 2025-06-02 20:00:38.332983 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 20:00:38.332992 | orchestrator | 2025-06-02 20:00:38.333002 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-02 20:00:38.333012 | 
orchestrator | Monday 02 June 2025 19:58:39 +0000 (0:00:00.696) 0:00:28.615 *********** 2025-06-02 20:00:38.333021 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:00:38.333031 | orchestrator | changed: [testbed-manager] 2025-06-02 20:00:38.333046 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:00:38.333056 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:00:38.333066 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:00:38.333075 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:00:38.333084 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:00:38.333094 | orchestrator | 2025-06-02 20:00:38.333104 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-02 20:00:38.333113 | orchestrator | Monday 02 June 2025 19:58:44 +0000 (0:00:04.316) 0:00:32.931 *********** 2025-06-02 20:00:38.333123 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 20:00:38.333134 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 20:00:38.333144 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 20:00:38.333159 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 20:00:38.333170 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 20:00:38.333179 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 20:00:38.333189 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 20:00:38.333199 | orchestrator | 2025-06-02 20:00:38.333208 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie 
exists] *************************** 2025-06-02 20:00:38.333218 | orchestrator | Monday 02 June 2025 19:58:47 +0000 (0:00:03.488) 0:00:36.419 *********** 2025-06-02 20:00:38.333228 | orchestrator | changed: [testbed-manager] 2025-06-02 20:00:38.333238 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:00:38.333271 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:00:38.333282 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:00:38.333292 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:00:38.333301 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:00:38.333311 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:00:38.333321 | orchestrator | 2025-06-02 20:00:38.333331 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-02 20:00:38.333341 | orchestrator | Monday 02 June 2025 19:58:51 +0000 (0:00:03.793) 0:00:40.213 *********** 2025-06-02 20:00:38.333356 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333367 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:00:38.333377 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:00:38.333404 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:00:38.333430 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.333445 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:00:38.333470 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.333480 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.333497 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:00:38.333524 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.333542 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:00:38.333582 | orchestrator | 
ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.333599 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:00:38.333652 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.333665 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.333675 | orchestrator | 2025-06-02 20:00:38.333685 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-02 20:00:38.333695 | orchestrator | Monday 02 June 2025 19:58:53 +0000 (0:00:02.248) 0:00:42.462 *********** 2025-06-02 20:00:38.333705 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 20:00:38.333714 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 20:00:38.333724 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 20:00:38.333744 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 20:00:38.333754 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 20:00:38.333764 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 20:00:38.333773 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 20:00:38.333783 | orchestrator | 2025-06-02 20:00:38.333793 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-02 20:00:38.333802 | orchestrator | Monday 02 June 2025 19:58:58 +0000 (0:00:04.472) 0:00:46.934 
*********** 2025-06-02 20:00:38.333812 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 20:00:38.333822 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 20:00:38.333831 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 20:00:38.333841 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 20:00:38.333850 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 20:00:38.333860 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 20:00:38.333870 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 20:00:38.333879 | orchestrator | 2025-06-02 20:00:38.333889 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-02 20:00:38.333898 | orchestrator | Monday 02 June 2025 19:59:00 +0000 (0:00:02.458) 0:00:49.393 *********** 2025-06-02 20:00:38.333920 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.333963 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.333980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.333991 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.334001 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.334056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334070 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 20:00:38.334137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334274 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:00:38.334308 | orchestrator | 2025-06-02 20:00:38.334333 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-02 20:00:38.334350 | orchestrator | Monday 02 June 2025 19:59:03 +0000 (0:00:02.938) 0:00:52.332 *********** 2025-06-02 20:00:38.334367 | orchestrator | changed: [testbed-manager] 2025-06-02 20:00:38.334382 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:00:38.334398 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:00:38.334413 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:00:38.334429 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:00:38.334446 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:00:38.334461 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:00:38.334475 | orchestrator | 2025-06-02 20:00:38.334490 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-02 20:00:38.334516 | orchestrator | Monday 02 June 2025 19:59:04 +0000 (0:00:01.473) 0:00:53.805 *********** 2025-06-02 20:00:38.334531 | orchestrator | changed: [testbed-manager] 2025-06-02 20:00:38.334548 | 
orchestrator | changed: [testbed-node-0] 2025-06-02 20:00:38.334563 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:00:38.334577 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:00:38.334592 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:00:38.334607 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:00:38.334622 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:00:38.334637 | orchestrator | 2025-06-02 20:00:38.334652 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 20:00:38.334667 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:01.187) 0:00:54.993 *********** 2025-06-02 20:00:38.334682 | orchestrator | 2025-06-02 20:00:38.334697 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 20:00:38.334711 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:00.196) 0:00:55.190 *********** 2025-06-02 20:00:38.334726 | orchestrator | 2025-06-02 20:00:38.334741 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 20:00:38.334756 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:00.072) 0:00:55.262 *********** 2025-06-02 20:00:38.334771 | orchestrator | 2025-06-02 20:00:38.334786 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 20:00:38.334801 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:00.079) 0:00:55.342 *********** 2025-06-02 20:00:38.334817 | orchestrator | 2025-06-02 20:00:38.334838 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 20:00:38.334854 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:00.067) 0:00:55.409 *********** 2025-06-02 20:00:38.334869 | orchestrator | 2025-06-02 20:00:38.334884 | orchestrator | TASK [common : Flush handlers] ************************************************* 
2025-06-02 20:00:38.334899 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:00.074) 0:00:55.483 *********** 2025-06-02 20:00:38.334914 | orchestrator | 2025-06-02 20:00:38.334928 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 20:00:38.334943 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:00.057) 0:00:55.541 *********** 2025-06-02 20:00:38.334958 | orchestrator | 2025-06-02 20:00:38.334973 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-02 20:00:38.334988 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:00.079) 0:00:55.621 *********** 2025-06-02 20:00:38.335003 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:00:38.335018 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:00:38.335033 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:00:38.335047 | orchestrator | changed: [testbed-manager] 2025-06-02 20:00:38.335062 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:00:38.335077 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:00:38.335092 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:00:38.335108 | orchestrator | 2025-06-02 20:00:38.335123 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-02 20:00:38.335137 | orchestrator | Monday 02 June 2025 19:59:45 +0000 (0:00:39.010) 0:01:34.631 *********** 2025-06-02 20:00:38.335152 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:00:38.335167 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:00:38.335182 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:00:38.335197 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:00:38.335213 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:00:38.335229 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:00:38.335270 | orchestrator | changed: [testbed-manager] 2025-06-02 20:00:38.335287 | orchestrator 
| 2025-06-02 20:00:38.335305 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-02 20:00:38.335322 | orchestrator | Monday 02 June 2025 20:00:23 +0000 (0:00:37.241) 0:02:11.873 *********** 2025-06-02 20:00:38.335340 | orchestrator | ok: [testbed-manager] 2025-06-02 20:00:38.335365 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:00:38.335379 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:00:38.335388 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:00:38.335398 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:00:38.335408 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:00:38.335417 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:00:38.335427 | orchestrator | 2025-06-02 20:00:38.335437 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-02 20:00:38.335447 | orchestrator | Monday 02 June 2025 20:00:25 +0000 (0:00:02.204) 0:02:14.078 *********** 2025-06-02 20:00:38.335456 | orchestrator | changed: [testbed-manager] 2025-06-02 20:00:38.335466 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:00:38.335476 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:00:38.335492 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:00:38.335508 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:00:38.335524 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:00:38.335541 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:00:38.335555 | orchestrator | 2025-06-02 20:00:38.335571 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:00:38.335590 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 20:00:38.335609 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 20:00:38.335635 | orchestrator | testbed-node-1 : ok=18  changed=14  
unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 20:00:38.335654 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 20:00:38.335671 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 20:00:38.335686 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 20:00:38.335704 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 20:00:38.335721 | orchestrator | 2025-06-02 20:00:38.335738 | orchestrator | 2025-06-02 20:00:38.335755 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:00:38.335773 | orchestrator | Monday 02 June 2025 20:00:34 +0000 (0:00:09.636) 0:02:23.714 *********** 2025-06-02 20:00:38.335789 | orchestrator | =============================================================================== 2025-06-02 20:00:38.335802 | orchestrator | common : Restart fluentd container ------------------------------------- 39.01s 2025-06-02 20:00:38.335812 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 37.24s 2025-06-02 20:00:38.335822 | orchestrator | common : Restart cron container ----------------------------------------- 9.64s 2025-06-02 20:00:38.335832 | orchestrator | common : Copying over config.json files for services -------------------- 6.17s 2025-06-02 20:00:38.335841 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.74s 2025-06-02 20:00:38.335851 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.47s 2025-06-02 20:00:38.335868 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.32s 2025-06-02 20:00:38.335878 | orchestrator | common : Ensuring config directories exist 
------------------------------ 4.00s 2025-06-02 20:00:38.335888 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.79s 2025-06-02 20:00:38.335898 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.49s 2025-06-02 20:00:38.335915 | orchestrator | common : Check common containers ---------------------------------------- 2.94s 2025-06-02 20:00:38.335925 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.58s 2025-06-02 20:00:38.335934 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.46s 2025-06-02 20:00:38.335944 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.25s 2025-06-02 20:00:38.335954 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.20s 2025-06-02 20:00:38.335964 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.13s 2025-06-02 20:00:38.335973 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.83s 2025-06-02 20:00:38.335983 | orchestrator | common : Creating log volume -------------------------------------------- 1.47s 2025-06-02 20:00:38.335992 | orchestrator | common : include_tasks -------------------------------------------------- 1.24s 2025-06-02 20:00:38.336002 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.19s 2025-06-02 20:00:38.336012 | orchestrator | 2025-06-02 20:00:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:41.366134 | orchestrator | 2025-06-02 20:00:41 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED 2025-06-02 20:00:41.366281 | orchestrator | 2025-06-02 20:00:41 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:00:41.366302 | orchestrator | 2025-06-02 20:00:41 | INFO  | Task 
759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:00:41.366596 | orchestrator | 2025-06-02 20:00:41 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:00:41.367019 | orchestrator | 2025-06-02 20:00:41 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:00:41.367462 | orchestrator | 2025-06-02 20:00:41 | INFO  | Task 1192ec47-1741-424b-89cc-f7747cb6bc86 is in state STARTED 2025-06-02 20:00:41.367488 | orchestrator | 2025-06-02 20:00:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:44.418377 | orchestrator | 2025-06-02 20:00:44 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED 2025-06-02 20:00:44.418471 | orchestrator | 2025-06-02 20:00:44 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:00:44.419792 | orchestrator | 2025-06-02 20:00:44 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:00:44.420050 | orchestrator | 2025-06-02 20:00:44 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:00:44.420709 | orchestrator | 2025-06-02 20:00:44 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:00:44.421420 | orchestrator | 2025-06-02 20:00:44 | INFO  | Task 1192ec47-1741-424b-89cc-f7747cb6bc86 is in state STARTED 2025-06-02 20:00:44.421449 | orchestrator | 2025-06-02 20:00:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:47.454385 | orchestrator | 2025-06-02 20:00:47 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED 2025-06-02 20:00:47.454495 | orchestrator | 2025-06-02 20:00:47 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:00:47.455797 | orchestrator | 2025-06-02 20:00:47 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:00:47.455860 | orchestrator | 2025-06-02 20:00:47 | INFO  | Task 
674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:00:47.456407 | orchestrator | 2025-06-02 20:00:47 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:00:47.457053 | orchestrator | 2025-06-02 20:00:47 | INFO  | Task 1192ec47-1741-424b-89cc-f7747cb6bc86 is in state STARTED 2025-06-02 20:00:47.457390 | orchestrator | 2025-06-02 20:00:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:50.493108 | orchestrator | 2025-06-02 20:00:50 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED 2025-06-02 20:00:50.493193 | orchestrator | 2025-06-02 20:00:50 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:00:50.493825 | orchestrator | 2025-06-02 20:00:50 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:00:50.496021 | orchestrator | 2025-06-02 20:00:50 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:00:50.496659 | orchestrator | 2025-06-02 20:00:50 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:00:50.497385 | orchestrator | 2025-06-02 20:00:50 | INFO  | Task 1192ec47-1741-424b-89cc-f7747cb6bc86 is in state STARTED 2025-06-02 20:00:50.497407 | orchestrator | 2025-06-02 20:00:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:53.537561 | orchestrator | 2025-06-02 20:00:53 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED 2025-06-02 20:00:53.537682 | orchestrator | 2025-06-02 20:00:53 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:00:53.537700 | orchestrator | 2025-06-02 20:00:53 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:00:53.537711 | orchestrator | 2025-06-02 20:00:53 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:00:53.537722 | orchestrator | 2025-06-02 20:00:53 | INFO  | Task 
620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:00:53.538560 | orchestrator | 2025-06-02 20:00:53 | INFO  | Task 1192ec47-1741-424b-89cc-f7747cb6bc86 is in state STARTED 2025-06-02 20:00:53.538640 | orchestrator | 2025-06-02 20:00:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:56.589172 | orchestrator | 2025-06-02 20:00:56 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED 2025-06-02 20:00:56.589324 | orchestrator | 2025-06-02 20:00:56 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:00:56.589852 | orchestrator | 2025-06-02 20:00:56 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:00:56.590668 | orchestrator | 2025-06-02 20:00:56 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:00:56.591058 | orchestrator | 2025-06-02 20:00:56 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:00:56.591950 | orchestrator | 2025-06-02 20:00:56 | INFO  | Task 1192ec47-1741-424b-89cc-f7747cb6bc86 is in state STARTED 2025-06-02 20:00:56.592009 | orchestrator | 2025-06-02 20:00:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:59.635364 | orchestrator | 2025-06-02 20:00:59 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED 2025-06-02 20:00:59.635458 | orchestrator | 2025-06-02 20:00:59 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:00:59.635905 | orchestrator | 2025-06-02 20:00:59 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:00:59.636448 | orchestrator | 2025-06-02 20:00:59 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:00:59.637119 | orchestrator | 2025-06-02 20:00:59 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:00:59.637659 | orchestrator | 2025-06-02 20:00:59 | INFO  | Task 
620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:00:59.638170 | orchestrator | 2025-06-02 20:00:59 | INFO  | Task 1192ec47-1741-424b-89cc-f7747cb6bc86 is in state SUCCESS 2025-06-02 20:00:59.638188 | orchestrator | 2025-06-02 20:00:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:02.667022 | orchestrator | 2025-06-02 20:01:02 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED 2025-06-02 20:01:02.668582 | orchestrator | 2025-06-02 20:01:02 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:01:02.670319 | orchestrator | 2025-06-02 20:01:02 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:01:02.671919 | orchestrator | 2025-06-02 20:01:02 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:01:02.672517 | orchestrator | 2025-06-02 20:01:02 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:01:02.674200 | orchestrator | 2025-06-02 20:01:02 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:01:02.674270 | orchestrator | 2025-06-02 20:01:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:05.706316 | orchestrator | 2025-06-02 20:01:05 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state STARTED 2025-06-02 20:01:05.706500 | orchestrator | 2025-06-02 20:01:05 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:01:05.707010 | orchestrator | 2025-06-02 20:01:05 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:01:05.707658 | orchestrator | 2025-06-02 20:01:05 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:01:05.708403 | orchestrator | 2025-06-02 20:01:05 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:01:05.709087 | orchestrator | 2025-06-02 20:01:05 | INFO  | Task 
620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:01:05.709111 | orchestrator | 2025-06-02 20:01:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:08.740095 | orchestrator | 2025-06-02 20:01:08.740232 | orchestrator | 2025-06-02 20:01:08.740357 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:01:08.740379 | orchestrator | 2025-06-02 20:01:08.740399 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:01:08.740418 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.365) 0:00:00.365 *********** 2025-06-02 20:01:08.740436 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:08.740626 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:08.740655 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:08.740674 | orchestrator | 2025-06-02 20:01:08.740693 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:01:08.740712 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.400) 0:00:00.766 *********** 2025-06-02 20:01:08.740731 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-06-02 20:01:08.740749 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-06-02 20:01:08.740767 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-06-02 20:01:08.740785 | orchestrator | 2025-06-02 20:01:08.740803 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-06-02 20:01:08.740821 | orchestrator | 2025-06-02 20:01:08.740839 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-06-02 20:01:08.740857 | orchestrator | Monday 02 June 2025 20:00:43 +0000 (0:00:00.657) 0:00:01.424 *********** 2025-06-02 20:01:08.740875 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:01:08.740893 | orchestrator | 2025-06-02 20:01:08.740914 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-06-02 20:01:08.740970 | orchestrator | Monday 02 June 2025 20:00:44 +0000 (0:00:01.017) 0:00:02.441 *********** 2025-06-02 20:01:08.740991 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-02 20:01:08.741011 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-02 20:01:08.741029 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-02 20:01:08.741047 | orchestrator | 2025-06-02 20:01:08.741066 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-06-02 20:01:08.741077 | orchestrator | Monday 02 June 2025 20:00:45 +0000 (0:00:01.063) 0:00:03.505 *********** 2025-06-02 20:01:08.741088 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-02 20:01:08.741100 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-02 20:01:08.741111 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-02 20:01:08.741122 | orchestrator | 2025-06-02 20:01:08.741133 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-06-02 20:01:08.741144 | orchestrator | Monday 02 June 2025 20:00:47 +0000 (0:00:02.734) 0:00:06.240 *********** 2025-06-02 20:01:08.741155 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:08.741166 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:08.741177 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:01:08.741188 | orchestrator | 2025-06-02 20:01:08.741199 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-06-02 20:01:08.741210 | orchestrator | Monday 02 June 2025 20:00:50 +0000 (0:00:02.536) 0:00:08.776 *********** 2025-06-02 20:01:08.741257 | orchestrator | changed: 
[testbed-node-0] 2025-06-02 20:01:08.741268 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:01:08.741279 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:08.741290 | orchestrator | 2025-06-02 20:01:08.741301 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:01:08.741312 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:01:08.741325 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:01:08.741336 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:01:08.741347 | orchestrator | 2025-06-02 20:01:08.741358 | orchestrator | 2025-06-02 20:01:08.741369 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:01:08.741380 | orchestrator | Monday 02 June 2025 20:00:57 +0000 (0:00:07.327) 0:00:16.104 *********** 2025-06-02 20:01:08.741391 | orchestrator | =============================================================================== 2025-06-02 20:01:08.741406 | orchestrator | memcached : Restart memcached container --------------------------------- 7.33s 2025-06-02 20:01:08.741440 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.73s 2025-06-02 20:01:08.741451 | orchestrator | memcached : Check memcached container ----------------------------------- 2.54s 2025-06-02 20:01:08.741463 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.06s 2025-06-02 20:01:08.741474 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.02s 2025-06-02 20:01:08.741484 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2025-06-02 20:01:08.741499 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.40s 2025-06-02 20:01:08.741535 | orchestrator | 2025-06-02 20:01:08.741593 | orchestrator | 2025-06-02 20:01:08 | INFO  | Task ebf7e6c2-e802-457d-a879-5743e234de1c is in state SUCCESS 2025-06-02 20:01:08.741607 | orchestrator | 2025-06-02 20:01:08.741618 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:01:08.741629 | orchestrator | 2025-06-02 20:01:08.741640 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:01:08.741661 | orchestrator | Monday 02 June 2025 20:00:41 +0000 (0:00:00.290) 0:00:00.290 *********** 2025-06-02 20:01:08.741672 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:08.741683 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:08.741694 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:08.741704 | orchestrator | 2025-06-02 20:01:08.741715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:01:08.741726 | orchestrator | Monday 02 June 2025 20:00:41 +0000 (0:00:00.494) 0:00:00.784 *********** 2025-06-02 20:01:08.741737 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-02 20:01:08.741747 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-02 20:01:08.741758 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-02 20:01:08.741769 | orchestrator | 2025-06-02 20:01:08.741780 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-02 20:01:08.741790 | orchestrator | 2025-06-02 20:01:08.741801 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-02 20:01:08.741812 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.727) 0:00:01.511 *********** 2025-06-02 20:01:08.741822 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:01:08.741833 | orchestrator | 2025-06-02 20:01:08.741844 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-02 20:01:08.741855 | orchestrator | Monday 02 June 2025 20:00:43 +0000 (0:00:00.782) 0:00:02.294 *********** 2025-06-02 20:01:08.741869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.741887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.741899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.741911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.741943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.741964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.741975 | orchestrator | 2025-06-02 20:01:08.741987 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-02 20:01:08.741998 | orchestrator | Monday 02 June 2025 20:00:44 +0000 (0:00:01.427) 0:00:03.721 *********** 2025-06-02 20:01:08.742009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742220 | orchestrator | 2025-06-02 20:01:08.742231 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-02 20:01:08.742275 | orchestrator | Monday 02 June 2025 20:00:48 +0000 (0:00:03.514) 0:00:07.235 *********** 2025-06-02 20:01:08.742287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2025-06-02 20:01:08.742311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742375 | orchestrator | 2025-06-02 20:01:08.742387 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-02 20:01:08.742398 | orchestrator | Monday 02 June 2025 20:00:51 +0000 (0:00:03.604) 0:00:10.840 *********** 2025-06-02 20:01:08.742410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 20:01:08.742516 | orchestrator | 2025-06-02 20:01:08.742527 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-02 20:01:08.742538 | orchestrator | Monday 02 June 2025 20:00:53 +0000 (0:00:01.787) 0:00:12.628 *********** 2025-06-02 20:01:08.742550 | orchestrator | 2025-06-02 20:01:08.742561 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-02 20:01:08.742572 | orchestrator | Monday 02 June 2025 20:00:53 +0000 (0:00:00.055) 0:00:12.683 *********** 2025-06-02 20:01:08.742583 | orchestrator | 2025-06-02 20:01:08.742594 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-02 20:01:08.742605 | orchestrator | Monday 02 June 2025 20:00:53 +0000 (0:00:00.110) 0:00:12.794 *********** 2025-06-02 20:01:08.742684 | orchestrator | 2025-06-02 20:01:08.742697 | orchestrator | RUNNING HANDLER [redis : Restart redis container] 
****************************** 2025-06-02 20:01:08.742708 | orchestrator | Monday 02 June 2025 20:00:53 +0000 (0:00:00.074) 0:00:12.869 *********** 2025-06-02 20:01:08.742719 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:01:08.742730 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:08.742741 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:08.742751 | orchestrator | 2025-06-02 20:01:08.742762 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-06-02 20:01:08.742773 | orchestrator | Monday 02 June 2025 20:01:02 +0000 (0:00:08.398) 0:00:21.267 *********** 2025-06-02 20:01:08.742784 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:08.742795 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:01:08.742806 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:08.742816 | orchestrator | 2025-06-02 20:01:08.742827 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:01:08.742839 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:01:08.742850 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:01:08.742861 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:01:08.742872 | orchestrator | 2025-06-02 20:01:08.742883 | orchestrator | 2025-06-02 20:01:08.742894 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:01:08.742906 | orchestrator | Monday 02 June 2025 20:01:06 +0000 (0:00:04.455) 0:00:25.723 *********** 2025-06-02 20:01:08.742932 | orchestrator | =============================================================================== 2025-06-02 20:01:08.742943 | orchestrator | redis : Restart redis container ----------------------------------------- 8.40s 
2025-06-02 20:01:08.742953 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.46s 2025-06-02 20:01:08.742964 | orchestrator | redis : Copying over redis config files --------------------------------- 3.60s 2025-06-02 20:01:08.742975 | orchestrator | redis : Copying over default config.json files -------------------------- 3.51s 2025-06-02 20:01:08.742986 | orchestrator | redis : Check redis containers ------------------------------------------ 1.79s 2025-06-02 20:01:08.742997 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.43s 2025-06-02 20:01:08.743008 | orchestrator | redis : include_tasks --------------------------------------------------- 0.78s 2025-06-02 20:01:08.743019 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s 2025-06-02 20:01:08.743030 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2025-06-02 20:01:08.743041 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2025-06-02 20:01:08.743170 | orchestrator | 2025-06-02 20:01:08 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:01:08.744823 | orchestrator | 2025-06-02 20:01:08 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:01:08.749168 | orchestrator | 2025-06-02 20:01:08 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:01:08.749815 | orchestrator | 2025-06-02 20:01:08 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:01:08.750838 | orchestrator | 2025-06-02 20:01:08 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:01:08.750882 | orchestrator | 2025-06-02 20:01:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:11.780172 | orchestrator | 2025-06-02 20:01:11 | INFO  | Task 
a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:01:42.181354 | orchestrator | 2025-06-02 20:01:42 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:01:42.182374 | orchestrator | 2025-06-02 20:01:42 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:01:42.183410 | orchestrator | 2025-06-02 20:01:42 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:01:42.184328 | orchestrator | 2025-06-02 20:01:42 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:01:42.184527 | orchestrator | 2025-06-02 20:01:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:45.226487 | orchestrator | 2025-06-02 20:01:45 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:01:45.226579 | orchestrator | 2025-06-02 20:01:45 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:01:45.226589 | orchestrator | 2025-06-02 20:01:45 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:01:45.226686 | orchestrator | 2025-06-02 20:01:45 | INFO  | Task 674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state STARTED 2025-06-02 20:01:45.227957 | orchestrator | 2025-06-02 20:01:45 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:01:45.228030 | orchestrator | 2025-06-02 20:01:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:48.272990 | orchestrator | 2025-06-02 20:01:48 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:01:48.274856 | orchestrator | 2025-06-02 20:01:48 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:01:48.278823 | orchestrator | 2025-06-02 20:01:48 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:01:48.280433 | orchestrator | 2025-06-02 20:01:48 | INFO  | Task 
674f0ba3-fa06-4785-af5e-808f8bc67f14 is in state SUCCESS 2025-06-02 20:01:48.282190 | orchestrator | 2025-06-02 20:01:48.282258 | orchestrator | 2025-06-02 20:01:48.282271 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:01:48.282284 | orchestrator | 2025-06-02 20:01:48.282295 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:01:48.282306 | orchestrator | Monday 02 June 2025 20:00:41 +0000 (0:00:00.458) 0:00:00.458 *********** 2025-06-02 20:01:48.282318 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:48.282330 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:48.282341 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:48.282352 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:01:48.282362 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:01:48.282373 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:01:48.282384 | orchestrator | 2025-06-02 20:01:48.282395 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:01:48.282406 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:01.086) 0:00:01.545 *********** 2025-06-02 20:01:48.282418 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 20:01:48.282429 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 20:01:48.282440 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 20:01:48.282450 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 20:01:48.282461 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 20:01:48.282472 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 20:01:48.282483 | orchestrator 
| 2025-06-02 20:01:48.282494 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-06-02 20:01:48.282505 | orchestrator | 2025-06-02 20:01:48.282516 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-06-02 20:01:48.282526 | orchestrator | Monday 02 June 2025 20:00:44 +0000 (0:00:01.336) 0:00:02.881 *********** 2025-06-02 20:01:48.282538 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:01:48.282550 | orchestrator | 2025-06-02 20:01:48.282561 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-02 20:01:48.282572 | orchestrator | Monday 02 June 2025 20:00:46 +0000 (0:00:02.624) 0:00:05.505 *********** 2025-06-02 20:01:48.282583 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-02 20:01:48.282595 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-02 20:01:48.282606 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-02 20:01:48.282617 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-02 20:01:48.282628 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-02 20:01:48.282639 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-02 20:01:48.282649 | orchestrator | 2025-06-02 20:01:48.282660 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 20:01:48.282699 | orchestrator | Monday 02 June 2025 20:00:48 +0000 (0:00:01.988) 0:00:07.494 *********** 2025-06-02 20:01:48.282711 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-02 20:01:48.282722 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-02 20:01:48.282732 | orchestrator | changed: [testbed-node-2] => 
(item=openvswitch) 2025-06-02 20:01:48.282743 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-02 20:01:48.282753 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-02 20:01:48.282764 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-02 20:01:48.282775 | orchestrator | 2025-06-02 20:01:48.282786 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 20:01:48.282799 | orchestrator | Monday 02 June 2025 20:00:50 +0000 (0:00:01.788) 0:00:09.282 *********** 2025-06-02 20:01:48.282812 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-06-02 20:01:48.282824 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:48.282838 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-06-02 20:01:48.282851 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:48.282863 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-06-02 20:01:48.282876 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-06-02 20:01:48.282888 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:48.282900 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-06-02 20:01:48.282913 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:01:48.282926 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:01:48.282938 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-06-02 20:01:48.282950 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:01:48.282963 | orchestrator | 2025-06-02 20:01:48.282975 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-06-02 20:01:48.282987 | orchestrator | Monday 02 June 2025 20:00:52 +0000 (0:00:01.526) 0:00:10.809 *********** 2025-06-02 20:01:48.282999 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:48.283012 | orchestrator | skipping: [testbed-node-1] 
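An aside on the module-load tasks above: the role first loads the kernel module at runtime and then persists it across reboots with a drop-in under /etc/modules-load.d/, which systemd-modules-load.service reads at boot (one module name per line). A minimal sketch of that persistence step; the real role runs as root on each node, and the drop-in file name used here is an assumption, not taken from the role:

```shell
# On the nodes the two tasks effectively amount to (as root):
#   modprobe openvswitch
#   echo openvswitch > /etc/modules-load.d/openvswitch.conf   # file name assumed
# The sketch below writes the same one-module-per-line drop-in into a scratch
# directory so it can be run unprivileged for inspection.
dest="$(mktemp -d)"
printf '%s\n' openvswitch > "${dest}/openvswitch.conf"
cat "${dest}/openvswitch.conf"
```

The "Drop module persistence" task is the inverse operation (removing the drop-in) and is skipped on all nodes here because the module is being deployed, not removed.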
2025-06-02 20:01:48.283024 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:48.283036 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:01:48.283048 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:01:48.283061 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:01:48.283072 | orchestrator |
2025-06-02 20:01:48.283084 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-06-02 20:01:48.283097 | orchestrator | Monday 02 June 2025 20:00:52 +0000 (0:00:00.739) 0:00:11.548 ***********
2025-06-02 20:01:48.283137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283357 | orchestrator |
2025-06-02 20:01:48.283369 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-06-02 20:01:48.283380 | orchestrator | Monday 02 June 2025 20:00:54 +0000 (0:00:01.708) 0:00:13.257 ***********
2025-06-02 20:01:48.283391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283652 | orchestrator |
2025-06-02 20:01:48.283663 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-06-02 20:01:48.283674 | orchestrator | Monday 02 June 2025 20:00:57 +0000 (0:00:03.099) 0:00:16.357 ***********
2025-06-02 20:01:48.283686 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:01:48.283697 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:48.283707 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:48.283718 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:01:48.283729 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:01:48.283739 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:01:48.283750 | orchestrator |
2025-06-02 20:01:48.283761 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-06-02 20:01:48.283772 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:01.027) 0:00:17.384 ***********
2025-06-02 20:01:48.283783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 20:01:48.283941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 20:01:48.283953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group':
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 20:01:48.283965 | orchestrator | 2025-06-02 20:01:48.283975 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 20:01:48.283987 | orchestrator | Monday 02 June 2025 20:01:02 +0000 (0:00:03.775) 0:00:21.159 *********** 2025-06-02 20:01:48.283997 | orchestrator | 2025-06-02 20:01:48.284008 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 20:01:48.284019 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:00.436) 0:00:21.595 *********** 2025-06-02 20:01:48.284030 | orchestrator | 2025-06-02 20:01:48.284040 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 20:01:48.284051 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:00.467) 0:00:22.065 *********** 2025-06-02 20:01:48.284061 | orchestrator | 2025-06-02 20:01:48.284072 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 20:01:48.284083 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:00.484) 0:00:22.550 *********** 2025-06-02 20:01:48.284093 | orchestrator | 2025-06-02 20:01:48.284104 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 20:01:48.284115 | orchestrator | Monday 02 June 2025 20:01:04 +0000 (0:00:00.292) 0:00:22.842 *********** 2025-06-02 20:01:48.284125 | orchestrator | 2025-06-02 20:01:48.284136 | orchestrator | TASK 
[openvswitch : Flush Handlers] ******************************************** 2025-06-02 20:01:48.284147 | orchestrator | Monday 02 June 2025 20:01:04 +0000 (0:00:00.121) 0:00:22.964 *********** 2025-06-02 20:01:48.284158 | orchestrator | 2025-06-02 20:01:48.284168 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-02 20:01:48.284179 | orchestrator | Monday 02 June 2025 20:01:04 +0000 (0:00:00.461) 0:00:23.426 *********** 2025-06-02 20:01:48.284190 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:48.284200 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:01:48.284211 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:01:48.284241 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:01:48.284252 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:01:48.284263 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:48.284273 | orchestrator | 2025-06-02 20:01:48.284284 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-02 20:01:48.284295 | orchestrator | Monday 02 June 2025 20:01:12 +0000 (0:00:07.533) 0:00:30.959 *********** 2025-06-02 20:01:48.284306 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:48.284317 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:48.284327 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:48.284345 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:01:48.284356 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:01:48.284366 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:01:48.284377 | orchestrator | 2025-06-02 20:01:48.284387 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 20:01:48.284398 | orchestrator | Monday 02 June 2025 20:01:13 +0000 (0:00:01.384) 0:00:32.344 *********** 2025-06-02 20:01:48.284409 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:48.284419 | orchestrator | changed: 
[testbed-node-1] 2025-06-02 20:01:48.284430 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:48.284440 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:01:48.284451 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:01:48.284462 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:01:48.284473 | orchestrator | 2025-06-02 20:01:48.284483 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-02 20:01:48.284494 | orchestrator | Monday 02 June 2025 20:01:23 +0000 (0:00:09.408) 0:00:41.753 *********** 2025-06-02 20:01:48.284505 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-02 20:01:48.284516 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-02 20:01:48.284526 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-02 20:01:48.284542 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-02 20:01:48.284553 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-02 20:01:48.284569 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-02 20:01:48.284581 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-02 20:01:48.284591 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-02 20:01:48.284602 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-02 20:01:48.284613 | orchestrator | 
changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-02 20:01:48.284624 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-02 20:01:48.284634 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-02 20:01:48.284645 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 20:01:48.284656 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 20:01:48.284666 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 20:01:48.284677 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 20:01:48.284688 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 20:01:48.284698 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 20:01:48.284709 | orchestrator | 2025-06-02 20:01:48.284720 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-02 20:01:48.284731 | orchestrator | Monday 02 June 2025 20:01:31 +0000 (0:00:07.929) 0:00:49.683 *********** 2025-06-02 20:01:48.284741 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-02 20:01:48.284759 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:01:48.284769 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-02 20:01:48.284780 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:01:48.284791 | orchestrator | skipping: 
[testbed-node-5] => (item=br-ex)  2025-06-02 20:01:48.284802 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:01:48.284813 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-02 20:01:48.284823 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-02 20:01:48.284834 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-02 20:01:48.284845 | orchestrator | 2025-06-02 20:01:48.284855 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-02 20:01:48.284866 | orchestrator | Monday 02 June 2025 20:01:33 +0000 (0:00:02.262) 0:00:51.946 *********** 2025-06-02 20:01:48.284877 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-02 20:01:48.284887 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:01:48.284898 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-02 20:01:48.284909 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:01:48.284919 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-02 20:01:48.284930 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:01:48.284940 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-02 20:01:48.284951 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-02 20:01:48.284962 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-02 20:01:48.284972 | orchestrator | 2025-06-02 20:01:48.284983 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 20:01:48.284994 | orchestrator | Monday 02 June 2025 20:01:37 +0000 (0:00:04.048) 0:00:55.995 *********** 2025-06-02 20:01:48.285004 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:48.285015 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:48.285025 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:01:48.285036 | 
orchestrator | changed: [testbed-node-4] 2025-06-02 20:01:48.285046 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:01:48.285057 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:01:48.285067 | orchestrator | 2025-06-02 20:01:48.285078 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:01:48.285089 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:01:48.285101 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:01:48.285112 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:01:48.285123 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:01:48.285138 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:01:48.285155 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:01:48.285166 | orchestrator | 2025-06-02 20:01:48.285177 | orchestrator | 2025-06-02 20:01:48.285188 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:01:48.285199 | orchestrator | Monday 02 June 2025 20:01:45 +0000 (0:00:08.281) 0:01:04.276 *********** 2025-06-02 20:01:48.285209 | orchestrator | =============================================================================== 2025-06-02 20:01:48.285263 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.69s 2025-06-02 20:01:48.285282 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.93s 2025-06-02 20:01:48.285293 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 7.53s 2025-06-02 
20:01:48.285304 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.05s 2025-06-02 20:01:48.285314 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.78s 2025-06-02 20:01:48.285325 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.10s 2025-06-02 20:01:48.285336 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.62s 2025-06-02 20:01:48.285347 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.27s 2025-06-02 20:01:48.285357 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.26s 2025-06-02 20:01:48.285368 | orchestrator | module-load : Load modules ---------------------------------------------- 1.99s 2025-06-02 20:01:48.285379 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.79s 2025-06-02 20:01:48.285389 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.71s 2025-06-02 20:01:48.285400 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.53s 2025-06-02 20:01:48.285411 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.38s 2025-06-02 20:01:48.285422 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.34s 2025-06-02 20:01:48.285432 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.09s 2025-06-02 20:01:48.285443 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.03s 2025-06-02 20:01:48.285454 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.74s 2025-06-02 20:01:48.285567 | orchestrator | 2025-06-02 20:01:48 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 
20:01:48.285581 | orchestrator | 2025-06-02 20:01:48 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:01:48.285593 | orchestrator | 2025-06-02 20:01:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:51.316680 | orchestrator | 2025-06-02 20:01:51 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:01:51.316790 | orchestrator | 2025-06-02 20:01:51 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:01:51.317863 | orchestrator | 2025-06-02 20:01:51 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:01:51.318701 | orchestrator | 2025-06-02 20:01:51 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:01:51.320306 | orchestrator | 2025-06-02 20:01:51 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:01:51.320349 | orchestrator | 2025-06-02 20:01:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:54.358873 | orchestrator | 2025-06-02 20:01:54 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:01:54.360169 | orchestrator | 2025-06-02 20:01:54 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:01:54.362651 | orchestrator | 2025-06-02 20:01:54 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:01:54.368700 | orchestrator | 2025-06-02 20:01:54 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:01:54.371786 | orchestrator | 2025-06-02 20:01:54 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:01:54.371872 | orchestrator | 2025-06-02 20:01:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:57.417300 | orchestrator | 2025-06-02 20:01:57 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:01:57.417540 | orchestrator 
| 2025-06-02 20:01:57 | INFO  | Task
7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:02:58.343533 | orchestrator | 2025-06-02 20:02:58 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:02:58.344207 | orchestrator | 2025-06-02 20:02:58 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:02:58.347372 | orchestrator | 2025-06-02 20:02:58 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:02:58.347437 | orchestrator | 2025-06-02 20:02:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:01.381842 | orchestrator | 2025-06-02 20:03:01 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:03:01.382001 | orchestrator | 2025-06-02 20:03:01 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:01.382929 | orchestrator | 2025-06-02 20:03:01 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:03:01.383579 | orchestrator | 2025-06-02 20:03:01 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:01.384336 | orchestrator | 2025-06-02 20:03:01 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:01.384406 | orchestrator | 2025-06-02 20:03:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:04.413653 | orchestrator | 2025-06-02 20:03:04 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:03:04.416686 | orchestrator | 2025-06-02 20:03:04 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:04.417215 | orchestrator | 2025-06-02 20:03:04 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:03:04.417894 | orchestrator | 2025-06-02 20:03:04 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:04.418728 | orchestrator | 2025-06-02 20:03:04 | INFO  | Task 
4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:04.418767 | orchestrator | 2025-06-02 20:03:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:07.453643 | orchestrator | 2025-06-02 20:03:07 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:03:07.457500 | orchestrator | 2025-06-02 20:03:07 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:07.459370 | orchestrator | 2025-06-02 20:03:07 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:03:07.459881 | orchestrator | 2025-06-02 20:03:07 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:07.460480 | orchestrator | 2025-06-02 20:03:07 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:07.460515 | orchestrator | 2025-06-02 20:03:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:10.481896 | orchestrator | 2025-06-02 20:03:10 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:03:10.483745 | orchestrator | 2025-06-02 20:03:10 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:10.485064 | orchestrator | 2025-06-02 20:03:10 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state STARTED 2025-06-02 20:03:10.486280 | orchestrator | 2025-06-02 20:03:10 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:10.487370 | orchestrator | 2025-06-02 20:03:10 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:10.488446 | orchestrator | 2025-06-02 20:03:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:13.525499 | orchestrator | 2025-06-02 20:03:13 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:03:13.526011 | orchestrator | 2025-06-02 20:03:13 | INFO  | Task 
7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:13.527553 | orchestrator | 2025-06-02 20:03:13 | INFO  | Task 759aa102-add1-4ef9-9e7a-938bc680cb67 is in state SUCCESS 2025-06-02 20:03:13.529173 | orchestrator | 2025-06-02 20:03:13.529202 | orchestrator | 2025-06-02 20:03:13.529208 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-02 20:03:13.529214 | orchestrator | 2025-06-02 20:03:13.529219 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-02 20:03:13.529225 | orchestrator | Monday 02 June 2025 19:58:11 +0000 (0:00:00.174) 0:00:00.175 *********** 2025-06-02 20:03:13.529230 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:03:13.529236 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:03:13.529240 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:03:13.529245 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:13.529250 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:03:13.529254 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:03:13.529259 | orchestrator | 2025-06-02 20:03:13.529264 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-02 20:03:13.529269 | orchestrator | Monday 02 June 2025 19:58:12 +0000 (0:00:00.641) 0:00:00.816 *********** 2025-06-02 20:03:13.529274 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.529280 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.529284 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.529289 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.529294 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.529298 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.529303 | orchestrator | 2025-06-02 20:03:13.529307 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-02 20:03:13.529312 | 
orchestrator | Monday 02 June 2025 19:58:13 +0000 (0:00:00.637) 0:00:01.454 *********** 2025-06-02 20:03:13.529316 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.529321 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.529325 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.529330 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.529335 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.529339 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.529344 | orchestrator | 2025-06-02 20:03:13.529348 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-02 20:03:13.529353 | orchestrator | Monday 02 June 2025 19:58:13 +0000 (0:00:00.785) 0:00:02.239 *********** 2025-06-02 20:03:13.529358 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:03:13.529363 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:03:13.529385 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:03:13.529390 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:13.529394 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:03:13.529399 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:03:13.529404 | orchestrator | 2025-06-02 20:03:13.529408 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-02 20:03:13.529413 | orchestrator | Monday 02 June 2025 19:58:15 +0000 (0:00:01.988) 0:00:04.228 *********** 2025-06-02 20:03:13.529417 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:03:13.529422 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:03:13.529426 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:03:13.529452 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:13.529456 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:03:13.529461 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:03:13.529466 | orchestrator | 2025-06-02 20:03:13.529470 | 
orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-02 20:03:13.529475 | orchestrator | Monday 02 June 2025 19:58:17 +0000 (0:00:01.074) 0:00:05.303 *********** 2025-06-02 20:03:13.529479 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:03:13.529484 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:03:13.529488 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:03:13.529493 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:13.529497 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:03:13.529502 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:03:13.529506 | orchestrator | 2025-06-02 20:03:13.529511 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-02 20:03:13.529516 | orchestrator | Monday 02 June 2025 19:58:18 +0000 (0:00:00.975) 0:00:06.278 *********** 2025-06-02 20:03:13.529520 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.529525 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.529529 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.529534 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.529538 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.529543 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.529547 | orchestrator | 2025-06-02 20:03:13.529552 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-02 20:03:13.529557 | orchestrator | Monday 02 June 2025 19:58:18 +0000 (0:00:00.792) 0:00:07.071 *********** 2025-06-02 20:03:13.529561 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.529566 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.529570 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.529585 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.529590 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
20:03:13.529594 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.529599 | orchestrator | 2025-06-02 20:03:13.529603 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-02 20:03:13.529608 | orchestrator | Monday 02 June 2025 19:58:19 +0000 (0:00:00.737) 0:00:07.808 *********** 2025-06-02 20:03:13.529613 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 20:03:13.529617 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 20:03:13.529622 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.529626 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 20:03:13.529631 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 20:03:13.529635 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.529640 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 20:03:13.529644 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 20:03:13.529649 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.529654 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 20:03:13.529670 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 20:03:13.529675 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.529680 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 20:03:13.529684 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 20:03:13.529689 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.529696 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  
2025-06-02 20:03:13.529704 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 20:03:13.529711 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.529718 | orchestrator | 2025-06-02 20:03:13.529726 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-02 20:03:13.529733 | orchestrator | Monday 02 June 2025 19:58:20 +0000 (0:00:01.065) 0:00:08.874 *********** 2025-06-02 20:03:13.529740 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.529747 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.529754 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.529761 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.529769 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.529777 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.529784 | orchestrator | 2025-06-02 20:03:13.529793 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-02 20:03:13.529803 | orchestrator | Monday 02 June 2025 19:58:22 +0000 (0:00:01.696) 0:00:10.570 *********** 2025-06-02 20:03:13.529810 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:03:13.529816 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:03:13.529821 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:03:13.529826 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:13.529831 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:03:13.529837 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:03:13.529842 | orchestrator | 2025-06-02 20:03:13.529848 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-02 20:03:13.529853 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:00.986) 0:00:11.556 *********** 2025-06-02 20:03:13.529859 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:03:13.529864 | 
orchestrator | changed: [testbed-node-5] 2025-06-02 20:03:13.529870 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:03:13.529875 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:13.529880 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:03:13.529885 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:03:13.529891 | orchestrator | 2025-06-02 20:03:13.529896 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-02 20:03:13.529901 | orchestrator | Monday 02 June 2025 19:58:29 +0000 (0:00:05.798) 0:00:17.354 *********** 2025-06-02 20:03:13.529907 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.529912 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.529917 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.529923 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.529928 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.529934 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.529939 | orchestrator | 2025-06-02 20:03:13.529944 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-02 20:03:13.529950 | orchestrator | Monday 02 June 2025 19:58:30 +0000 (0:00:01.565) 0:00:18.920 *********** 2025-06-02 20:03:13.529955 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.529960 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.529966 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.529971 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.529976 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.529982 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.529987 | orchestrator | 2025-06-02 20:03:13.529997 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-02 20:03:13.530004 | 
orchestrator | Monday 02 June 2025 19:58:32 +0000 (0:00:02.254) 0:00:21.175 *********** 2025-06-02 20:03:13.530009 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.530041 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.530046 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.530052 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.530057 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530062 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.530067 | orchestrator | 2025-06-02 20:03:13.530073 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-02 20:03:13.530078 | orchestrator | Monday 02 June 2025 19:58:33 +0000 (0:00:00.575) 0:00:21.751 *********** 2025-06-02 20:03:13.530084 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-02 20:03:13.530094 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-02 20:03:13.530099 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.530105 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-02 20:03:13.530110 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-02 20:03:13.530116 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.530122 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-02 20:03:13.530127 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-02 20:03:13.530132 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.530179 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-02 20:03:13.530185 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-02 20:03:13.530189 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.530194 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-02 20:03:13.530199 | orchestrator | skipping: [testbed-node-1] => 
(item=rancher/k3s)  2025-06-02 20:03:13.530203 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530208 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-02 20:03:13.530212 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-02 20:03:13.530217 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.530222 | orchestrator | 2025-06-02 20:03:13.530226 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-02 20:03:13.530236 | orchestrator | Monday 02 June 2025 19:58:34 +0000 (0:00:00.971) 0:00:22.722 *********** 2025-06-02 20:03:13.530241 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:03:13.530245 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:03:13.530250 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:03:13.530254 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.530259 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530264 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.530268 | orchestrator | 2025-06-02 20:03:13.530273 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-02 20:03:13.530277 | orchestrator | 2025-06-02 20:03:13.530282 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-02 20:03:13.530287 | orchestrator | Monday 02 June 2025 19:58:36 +0000 (0:00:01.859) 0:00:24.582 *********** 2025-06-02 20:03:13.530291 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:13.530296 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:03:13.530300 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:03:13.530305 | orchestrator | 2025-06-02 20:03:13.530309 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-02 20:03:13.530314 | orchestrator | Monday 02 June 2025 19:58:37 +0000 (0:00:01.008) 0:00:25.590 
*********** 2025-06-02 20:03:13.530318 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:03:13.530323 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:13.530328 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:03:13.530332 | orchestrator | 2025-06-02 20:03:13.530337 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-02 20:03:13.530346 | orchestrator | Monday 02 June 2025 19:58:38 +0000 (0:00:01.373) 0:00:26.964 *********** 2025-06-02 20:03:13.530350 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:13.530355 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:03:13.530359 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:03:13.530364 | orchestrator | 2025-06-02 20:03:13.530369 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-02 20:03:13.530373 | orchestrator | Monday 02 June 2025 19:58:39 +0000 (0:00:01.018) 0:00:27.982 *********** 2025-06-02 20:03:13.530378 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:03:13.530382 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:03:13.530387 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:13.530391 | orchestrator | 2025-06-02 20:03:13.530396 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-02 20:03:13.530400 | orchestrator | Monday 02 June 2025 19:58:40 +0000 (0:00:00.888) 0:00:28.871 *********** 2025-06-02 20:03:13.530405 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.530410 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530414 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.530419 | orchestrator | 2025-06-02 20:03:13.530423 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-02 20:03:13.530428 | orchestrator | Monday 02 June 2025 19:58:41 +0000 (0:00:00.449) 0:00:29.320 *********** 2025-06-02 20:03:13.530432 | orchestrator | 
included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:03:13.530437 | orchestrator | 2025-06-02 20:03:13.530442 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-02 20:03:13.530446 | orchestrator | Monday 02 June 2025 19:58:41 +0000 (0:00:00.659) 0:00:29.980 *********** 2025-06-02 20:03:13.530451 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:13.530455 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:03:13.530460 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:03:13.530464 | orchestrator | 2025-06-02 20:03:13.530469 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-02 20:03:13.530474 | orchestrator | Monday 02 June 2025 19:58:43 +0000 (0:00:01.892) 0:00:31.872 *********** 2025-06-02 20:03:13.530478 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530483 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.530487 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:13.530492 | orchestrator | 2025-06-02 20:03:13.530496 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-02 20:03:13.530501 | orchestrator | Monday 02 June 2025 19:58:44 +0000 (0:00:00.843) 0:00:32.716 *********** 2025-06-02 20:03:13.530506 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530510 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.530515 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:13.530519 | orchestrator | 2025-06-02 20:03:13.530524 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-02 20:03:13.530528 | orchestrator | Monday 02 June 2025 19:58:45 +0000 (0:00:01.052) 0:00:33.768 *********** 2025-06-02 20:03:13.530533 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530537 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 20:03:13.530542 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:13.530546 | orchestrator | 2025-06-02 20:03:13.530554 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-02 20:03:13.530558 | orchestrator | Monday 02 June 2025 19:58:48 +0000 (0:00:02.514) 0:00:36.283 *********** 2025-06-02 20:03:13.530563 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.530567 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530572 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.530576 | orchestrator | 2025-06-02 20:03:13.530581 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-02 20:03:13.530586 | orchestrator | Monday 02 June 2025 19:58:48 +0000 (0:00:00.445) 0:00:36.729 *********** 2025-06-02 20:03:13.530594 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.530598 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530603 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.530607 | orchestrator | 2025-06-02 20:03:13.530612 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-02 20:03:13.530616 | orchestrator | Monday 02 June 2025 19:58:48 +0000 (0:00:00.509) 0:00:37.238 *********** 2025-06-02 20:03:13.530621 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:13.530625 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:03:13.530630 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:03:13.530634 | orchestrator | 2025-06-02 20:03:13.530639 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-02 20:03:13.530644 | orchestrator | Monday 02 June 2025 19:58:51 +0000 (0:00:02.050) 0:00:39.289 *********** 2025-06-02 20:03:13.530651 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually 
joined (check k3s-init.service if this fails) (20 retries left). 2025-06-02 20:03:13.530656 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-02 20:03:13.530661 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-02 20:03:13.530665 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-02 20:03:13.530670 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-02 20:03:13.530675 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-02 20:03:13.530679 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-02 20:03:13.530684 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-02 20:03:13.530688 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-02 20:03:13.530693 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-02 20:03:13.530698 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-06-02 20:03:13.530702 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-02 20:03:13.530707 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-02 20:03:13.530711 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-02 20:03:13.530716 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-02 20:03:13.530720 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:13.530725 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:03:13.530730 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:03:13.530734 | orchestrator | 2025-06-02 20:03:13.530739 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-02 20:03:13.530743 | orchestrator | Monday 02 June 2025 19:59:47 +0000 (0:00:56.661) 0:01:35.951 *********** 2025-06-02 20:03:13.530748 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:13.530753 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:13.530761 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:13.530765 | orchestrator | 2025-06-02 20:03:13.530770 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-02 20:03:13.530775 | orchestrator | Monday 02 June 2025 19:59:48 +0000 (0:00:00.383) 0:01:36.334 *********** 2025-06-02 20:03:13.530779 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:13.530784 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:03:13.530788 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:03:13.530793 | orchestrator | 2025-06-02 20:03:13.530801 | orchestrator | TASK 
[k3s_server : Copy K3s service file] **************************************
2025-06-02 20:03:13.530809 | orchestrator | Monday 02 June 2025 19:59:49 +0000 (0:00:01.454) 0:01:37.789 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Monday 02 June 2025 19:59:50 +0000 (0:00:01.342) 0:01:39.131 ***********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [k3s_server : Wait for node-token] ****************************************
Monday 02 June 2025 20:00:08 +0000 (0:00:17.674) 0:01:56.806 ***********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Monday 02 June 2025 20:00:09 +0000 (0:00:00.667) 0:01:57.474 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Monday 02 June 2025 20:00:09 +0000 (0:00:00.696) 0:01:58.171 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Monday 02 June 2025 20:00:10 +0000 (0:00:00.727) 0:01:58.898 ***********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Store Master node-token] ************************************
Monday 02 June 2025 20:00:11 +0000 (0:00:00.939) 0:01:59.838 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Monday 02 June 2025 20:00:11 +0000 (0:00:00.288) 0:02:00.126 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Monday 02 June 2025 20:00:12 +0000 (0:00:00.633) 0:02:00.759 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Monday 02 June 2025 20:00:13 +0000 (0:00:00.598) 0:02:01.357 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Monday 02 June 2025 20:00:14 +0000 (0:00:01.079) 0:02:02.437 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Monday 02 June 2025 20:00:15 +0000 (0:00:00.882) 0:02:03.320 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Monday 02 June 2025 20:00:15 +0000 (0:00:00.311) 0:02:03.632 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Monday 02 June 2025 20:00:15 +0000 (0:00:00.299) 0:02:03.931 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Monday 02 June 2025 20:00:16 +0000 (0:00:00.886) 0:02:04.818 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Monday 02 June 2025 20:00:17 +0000 (0:00:00.643) 0:02:05.461 ***********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Monday 02 June 2025 20:00:20 +0000 (0:00:03.141) 0:02:08.603 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Monday 02 June 2025 20:00:20 +0000 (0:00:00.499) 0:02:09.102 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Monday 02 June 2025 20:00:21 +0000 (0:00:00.588) 0:02:09.691 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Monday 02 June 2025 20:00:21 +0000 (0:00:00.291) 0:02:09.983 ***********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Monday 02 June 2025 20:00:22 +0000 (0:00:00.615) 0:02:10.598 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Monday 02 June 2025 20:00:22 +0000 (0:00:00.282) 0:02:10.880 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Monday 02 June 2025 20:00:22 +0000 (0:00:00.298) 0:02:11.179 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Monday 02 June 2025 20:00:23 +0000 (0:00:00.266) 0:02:11.446 ***********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [k3s_agent : Manage k3s service] ******************************************
Monday 02 June 2025 20:00:24 +0000 (0:00:01.615) 0:02:13.062 ***********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Monday 02 June 2025 20:00:33 +0000 (0:00:08.309) 0:02:21.371 ***********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Monday 02 June 2025 20:00:33 +0000 (0:00:00.857) 0:02:22.228 ***********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Monday 02 June 2025 20:00:34 +0000 (0:00:00.499) 0:02:22.728 ***********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Monday 02 June 2025 20:00:35 +0000 (0:00:01.013) 0:02:23.742 ***********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Monday 02 June 2025 20:00:36 +0000 (0:00:00.874) 0:02:24.616 ***********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Monday 02 June 2025 20:00:36 +0000 (0:00:00.570) 0:02:25.186 ***********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Monday 02 June 2025 20:00:38 +0000 (0:00:01.443) 0:02:26.630 ***********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Monday 02 June 2025 20:00:39 +0000 (0:00:00.809) 0:02:27.440 ***********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Monday 02 June 2025 20:00:39 +0000 (0:00:00.400) 0:02:27.840 ***********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Monday 02 June 2025 20:00:40 +0000 (0:00:00.448) 0:02:28.289 ***********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Monday 02 June 2025 20:00:40 +0000 (0:00:00.123) 0:02:28.413 ***********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Monday 02 June 2025 20:00:40 +0000 (0:00:00.356) 0:02:28.770 ***********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Monday 02 June 2025 20:00:41 +0000 (0:00:00.640) 0:02:29.410 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Monday 02 June 2025 20:00:42 +0000 (0:00:01.166) 0:02:30.577 ***********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Monday 02 June 2025 20:00:42 +0000 (0:00:00.647) 0:02:31.224 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Monday 02 June 2025 20:00:43 +0000 (0:00:00.423) 0:02:31.648 ***********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Monday 02 June 2025 20:00:49 +0000 (0:00:05.760) 0:02:37.408 ***********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Monday 02 June 2025 20:00:59 +0000 (0:00:10.339) 0:02:47.748 ***********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Monday 02 June 2025 20:00:59 +0000 (0:00:00.493) 0:02:48.241 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Monday 02 June 2025 20:01:00 +0000 (0:00:00.390) 0:02:48.632 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Monday 02 June 2025 20:01:00 +0000 (0:00:00.265) 0:02:48.898 ***********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Monday 02 June 2025 20:01:01 +0000 (0:00:00.381) 0:02:49.280 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Monday 02 June 2025 20:01:02 +0000 (0:00:01.091) 0:02:50.372 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Monday 02 June 2025 20:01:03 +0000 (0:00:00.961) 0:02:51.334 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Monday 02 June 2025 20:01:03 +0000 (0:00:00.172) 0:02:51.506 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Monday 02 June 2025 20:01:04 +0000 (0:00:01.061) 0:02:52.568 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Monday 02 June 2025 20:01:04 +0000 (0:00:00.213) 0:02:52.781 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Monday 02 June 2025 20:01:04 +0000 (0:00:00.158) 0:02:52.940 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Monday 02 June 2025 20:01:04 +0000 (0:00:00.158) 0:02:53.099 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Monday 02 June 2025 20:01:05 +0000 (0:00:00.202) 0:02:53.301 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Monday 02 June 2025 20:01:09 +0000 (0:00:04.765) 0:02:58.066 ***********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Monday 02 June 2025 20:02:44 +0000 (0:01:34.672) 0:04:32.739 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Monday 02 June 2025 20:02:45 +0000 (0:00:01.301) 0:04:34.040 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Monday 02 June 2025 20:02:47 +0000 (0:00:01.811) 0:04:35.852 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Monday 02 June 2025 20:02:49 +0000 (0:00:01.796) 0:04:37.648 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Monday 02 June 2025 20:02:49 +0000 (0:00:00.291) 0:04:37.940 ***********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Monday 02 June 2025 20:02:51 +0000 (0:00:01.886) 0:04:39.827 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Monday 02 June 2025 20:02:51 +0000 (0:00:00.280) 0:04:40.107 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Monday 02 June 2025 20:02:52 +0000 (0:00:00.798) 0:04:40.905 ***********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Monday 02 June 2025 20:02:52 +0000 (0:00:00.237) 0:04:41.143 ***********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Monday 02 June 2025 20:02:53 +0000 (0:00:00.193) 0:04:41.337 ***********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Monday 02 June 2025 20:02:58 +0000 (0:00:05.105) 0:04:46.442 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Monday 02 June 2025 20:02:58 +0000 (0:00:00.471) 0:04:46.913 ***********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Monday 02 June 2025 20:03:09 +0000 (0:00:10.808) 0:04:57.722 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Monday 02 June 2025 20:03:09 +0000 (0:00:00.397) 0:04:58.119 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
testbed-node-5 : ok=14  
changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 20:03:13.533538 | orchestrator | 2025-06-02 20:03:13.533542 | orchestrator | 2025-06-02 20:03:13.533547 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:03:13.533555 | orchestrator | Monday 02 June 2025 20:03:10 +0000 (0:00:00.439) 0:04:58.558 *********** 2025-06-02 20:03:13.533560 | orchestrator | =============================================================================== 2025-06-02 20:03:13.533564 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 94.67s 2025-06-02 20:03:13.533569 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.66s 2025-06-02 20:03:13.533573 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 17.67s 2025-06-02 20:03:13.533578 | orchestrator | Manage labels ---------------------------------------------------------- 10.81s 2025-06-02 20:03:13.533585 | orchestrator | kubectl : Install required packages ------------------------------------ 10.34s 2025-06-02 20:03:13.533590 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.31s 2025-06-02 20:03:13.533594 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.80s 2025-06-02 20:03:13.533599 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.76s 2025-06-02 20:03:13.533603 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.11s 2025-06-02 20:03:13.533608 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.77s 2025-06-02 20:03:13.533612 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.14s 2025-06-02 20:03:13.533617 | orchestrator 
| k3s_server : Copy vip manifest to first master -------------------------- 2.51s 2025-06-02 20:03:13.533622 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.25s 2025-06-02 20:03:13.533626 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.05s 2025-06-02 20:03:13.533630 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.99s 2025-06-02 20:03:13.533635 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.89s 2025-06-02 20:03:13.533639 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.89s 2025-06-02 20:03:13.533644 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.86s 2025-06-02 20:03:13.533649 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.81s 2025-06-02 20:03:13.533653 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 1.80s 2025-06-02 20:03:13.533762 | orchestrator | 2025-06-02 20:03:13 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:13.533770 | orchestrator | 2025-06-02 20:03:13 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:13.535684 | orchestrator | 2025-06-02 20:03:13 | INFO  | Task 1d85cf7a-2cc4-4707-9cd3-905f86236c89 is in state STARTED 2025-06-02 20:03:13.539982 | orchestrator | 2025-06-02 20:03:13 | INFO  | Task 1bd3ca7e-5f21-4b60-a670-9b4807e84e29 is in state STARTED 2025-06-02 20:03:13.540002 | orchestrator | 2025-06-02 20:03:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:16.612406 | orchestrator | 2025-06-02 20:03:16 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:03:16.612499 | orchestrator | 2025-06-02 20:03:16 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in 
state STARTED 2025-06-02 20:03:16.616814 | orchestrator | 2025-06-02 20:03:16 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:16.616843 | orchestrator | 2025-06-02 20:03:16 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:16.616852 | orchestrator | 2025-06-02 20:03:16 | INFO  | Task 1d85cf7a-2cc4-4707-9cd3-905f86236c89 is in state SUCCESS 2025-06-02 20:03:16.617581 | orchestrator | 2025-06-02 20:03:16 | INFO  | Task 1bd3ca7e-5f21-4b60-a670-9b4807e84e29 is in state STARTED 2025-06-02 20:03:16.617623 | orchestrator | 2025-06-02 20:03:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:19.659334 | orchestrator | 2025-06-02 20:03:19 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:03:19.662075 | orchestrator | 2025-06-02 20:03:19 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:19.668766 | orchestrator | 2025-06-02 20:03:19 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:19.670458 | orchestrator | 2025-06-02 20:03:19 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:19.672975 | orchestrator | 2025-06-02 20:03:19 | INFO  | Task 1bd3ca7e-5f21-4b60-a670-9b4807e84e29 is in state STARTED 2025-06-02 20:03:19.673576 | orchestrator | 2025-06-02 20:03:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:22.723476 | orchestrator | 2025-06-02 20:03:22 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:03:22.724689 | orchestrator | 2025-06-02 20:03:22 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:22.731184 | orchestrator | 2025-06-02 20:03:22 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:22.732415 | orchestrator | 2025-06-02 20:03:22 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state 
STARTED 2025-06-02 20:03:22.732908 | orchestrator | 2025-06-02 20:03:22 | INFO  | Task 1bd3ca7e-5f21-4b60-a670-9b4807e84e29 is in state SUCCESS 2025-06-02 20:03:22.733387 | orchestrator | 2025-06-02 20:03:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:25.765079 | orchestrator | 2025-06-02 20:03:25 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state STARTED 2025-06-02 20:03:25.765823 | orchestrator | 2025-06-02 20:03:25 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:25.766790 | orchestrator | 2025-06-02 20:03:25 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:25.767875 | orchestrator | 2025-06-02 20:03:25 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:25.768160 | orchestrator | 2025-06-02 20:03:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:28.812598 | orchestrator | 2025-06-02 20:03:28 | INFO  | Task a9c98c76-fc78-4203-82a5-3568207b626f is in state SUCCESS 2025-06-02 20:03:28.815049 | orchestrator | 2025-06-02 20:03:28.815156 | orchestrator | 2025-06-02 20:03:28.815179 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-02 20:03:28.815193 | orchestrator | 2025-06-02 20:03:28.815205 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 20:03:28.815216 | orchestrator | Monday 02 June 2025 20:03:13 +0000 (0:00:00.198) 0:00:00.198 *********** 2025-06-02 20:03:28.815228 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 20:03:28.815239 | orchestrator | 2025-06-02 20:03:28.815251 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 20:03:28.815262 | orchestrator | Monday 02 June 2025 20:03:14 +0000 (0:00:00.799) 0:00:00.997 *********** 2025-06-02 20:03:28.815273 | orchestrator | changed: 
[testbed-manager] 2025-06-02 20:03:28.815285 | orchestrator | 2025-06-02 20:03:28.815296 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-02 20:03:28.815307 | orchestrator | Monday 02 June 2025 20:03:15 +0000 (0:00:01.156) 0:00:02.153 *********** 2025-06-02 20:03:28.815319 | orchestrator | changed: [testbed-manager] 2025-06-02 20:03:28.815330 | orchestrator | 2025-06-02 20:03:28.815341 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:03:28.815352 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:03:28.815394 | orchestrator | 2025-06-02 20:03:28.815406 | orchestrator | 2025-06-02 20:03:28.815417 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:03:28.815429 | orchestrator | Monday 02 June 2025 20:03:16 +0000 (0:00:00.418) 0:00:02.571 *********** 2025-06-02 20:03:28.815439 | orchestrator | =============================================================================== 2025-06-02 20:03:28.815450 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.16s 2025-06-02 20:03:28.815461 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2025-06-02 20:03:28.815472 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s 2025-06-02 20:03:28.815483 | orchestrator | 2025-06-02 20:03:28.815494 | orchestrator | 2025-06-02 20:03:28.815505 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-02 20:03:28.815516 | orchestrator | 2025-06-02 20:03:28.815527 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-02 20:03:28.815538 | orchestrator | Monday 02 June 2025 20:03:14 +0000 (0:00:00.191) 0:00:00.191 
*********** 2025-06-02 20:03:28.815549 | orchestrator | ok: [testbed-manager] 2025-06-02 20:03:28.815561 | orchestrator | 2025-06-02 20:03:28.815572 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-02 20:03:28.815584 | orchestrator | Monday 02 June 2025 20:03:15 +0000 (0:00:00.629) 0:00:00.820 *********** 2025-06-02 20:03:28.815597 | orchestrator | ok: [testbed-manager] 2025-06-02 20:03:28.815609 | orchestrator | 2025-06-02 20:03:28.815622 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 20:03:28.815634 | orchestrator | Monday 02 June 2025 20:03:15 +0000 (0:00:00.530) 0:00:01.351 *********** 2025-06-02 20:03:28.815648 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 20:03:28.815660 | orchestrator | 2025-06-02 20:03:28.815674 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 20:03:28.815687 | orchestrator | Monday 02 June 2025 20:03:16 +0000 (0:00:00.693) 0:00:02.044 *********** 2025-06-02 20:03:28.815700 | orchestrator | changed: [testbed-manager] 2025-06-02 20:03:28.815713 | orchestrator | 2025-06-02 20:03:28.815727 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-02 20:03:28.815740 | orchestrator | Monday 02 June 2025 20:03:17 +0000 (0:00:01.240) 0:00:03.285 *********** 2025-06-02 20:03:28.815753 | orchestrator | changed: [testbed-manager] 2025-06-02 20:03:28.815765 | orchestrator | 2025-06-02 20:03:28.815778 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-02 20:03:28.815790 | orchestrator | Monday 02 June 2025 20:03:18 +0000 (0:00:00.893) 0:00:04.179 *********** 2025-06-02 20:03:28.815803 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 20:03:28.815818 | orchestrator | 2025-06-02 20:03:28.815831 | orchestrator | TASK [Change 
server address in the kubeconfig inside the manager service] ****** 2025-06-02 20:03:28.815843 | orchestrator | Monday 02 June 2025 20:03:20 +0000 (0:00:01.517) 0:00:05.696 *********** 2025-06-02 20:03:28.815856 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 20:03:28.815869 | orchestrator | 2025-06-02 20:03:28.815882 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-02 20:03:28.815894 | orchestrator | Monday 02 June 2025 20:03:20 +0000 (0:00:00.847) 0:00:06.544 *********** 2025-06-02 20:03:28.815908 | orchestrator | ok: [testbed-manager] 2025-06-02 20:03:28.815920 | orchestrator | 2025-06-02 20:03:28.815933 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-02 20:03:28.815946 | orchestrator | Monday 02 June 2025 20:03:21 +0000 (0:00:00.366) 0:00:06.910 *********** 2025-06-02 20:03:28.815959 | orchestrator | ok: [testbed-manager] 2025-06-02 20:03:28.815970 | orchestrator | 2025-06-02 20:03:28.815995 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:03:28.816007 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:03:28.816027 | orchestrator | 2025-06-02 20:03:28.816038 | orchestrator | 2025-06-02 20:03:28.816049 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:03:28.816060 | orchestrator | Monday 02 June 2025 20:03:21 +0000 (0:00:00.264) 0:00:07.174 *********** 2025-06-02 20:03:28.816071 | orchestrator | =============================================================================== 2025-06-02 20:03:28.816081 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s 2025-06-02 20:03:28.816092 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.24s 2025-06-02 
20:03:28.816103 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.89s 2025-06-02 20:03:28.816177 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.85s 2025-06-02 20:03:28.816202 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2025-06-02 20:03:28.816221 | orchestrator | Get home directory of operator user ------------------------------------- 0.63s 2025-06-02 20:03:28.816240 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s 2025-06-02 20:03:28.816251 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s 2025-06-02 20:03:28.816262 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2025-06-02 20:03:28.816273 | orchestrator | 2025-06-02 20:03:28.816284 | orchestrator | 2025-06-02 20:03:28.816296 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-02 20:03:28.816306 | orchestrator | 2025-06-02 20:03:28.816317 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 20:03:28.816328 | orchestrator | Monday 02 June 2025 20:01:06 +0000 (0:00:00.168) 0:00:00.168 *********** 2025-06-02 20:03:28.816340 | orchestrator | ok: [localhost] => { 2025-06-02 20:03:28.816352 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-06-02 20:03:28.816363 | orchestrator | } 2025-06-02 20:03:28.816375 | orchestrator | 2025-06-02 20:03:28.816386 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-02 20:03:28.816397 | orchestrator | Monday 02 June 2025 20:01:06 +0000 (0:00:00.060) 0:00:00.228 *********** 2025-06-02 20:03:28.816408 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-02 20:03:28.816422 | orchestrator | ...ignoring 2025-06-02 20:03:28.816433 | orchestrator | 2025-06-02 20:03:28.816445 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-02 20:03:28.816455 | orchestrator | Monday 02 June 2025 20:01:10 +0000 (0:00:03.458) 0:00:03.687 *********** 2025-06-02 20:03:28.816466 | orchestrator | skipping: [localhost] 2025-06-02 20:03:28.816477 | orchestrator | 2025-06-02 20:03:28.816488 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-02 20:03:28.816499 | orchestrator | Monday 02 June 2025 20:01:10 +0000 (0:00:00.041) 0:00:03.729 *********** 2025-06-02 20:03:28.816510 | orchestrator | ok: [localhost] 2025-06-02 20:03:28.816520 | orchestrator | 2025-06-02 20:03:28.816531 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:03:28.816542 | orchestrator | 2025-06-02 20:03:28.816553 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:03:28.816564 | orchestrator | Monday 02 June 2025 20:01:10 +0000 (0:00:00.125) 0:00:03.854 *********** 2025-06-02 20:03:28.816575 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:28.816586 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:03:28.816597 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:03:28.816608 | orchestrator | 2025-06-02 20:03:28.816619 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:03:28.816629 | orchestrator | Monday 02 June 2025 20:01:10 +0000 (0:00:00.322) 0:00:04.177 *********** 2025-06-02 20:03:28.816711 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-02 20:03:28.816725 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-06-02 20:03:28.816736 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-02 20:03:28.816747 | orchestrator | 2025-06-02 20:03:28.816758 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-02 20:03:28.816769 | orchestrator | 2025-06-02 20:03:28.816780 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 20:03:28.816791 | orchestrator | Monday 02 June 2025 20:01:10 +0000 (0:00:00.469) 0:00:04.646 *********** 2025-06-02 20:03:28.816802 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:03:28.816813 | orchestrator | 2025-06-02 20:03:28.816824 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 20:03:28.816835 | orchestrator | Monday 02 June 2025 20:01:11 +0000 (0:00:00.467) 0:00:05.114 *********** 2025-06-02 20:03:28.816845 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:28.816856 | orchestrator | 2025-06-02 20:03:28.816867 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-02 20:03:28.816878 | orchestrator | Monday 02 June 2025 20:01:12 +0000 (0:00:00.952) 0:00:06.067 *********** 2025-06-02 20:03:28.816889 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:28.816900 | orchestrator | 2025-06-02 20:03:28.816911 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-06-02 20:03:28.816922 | orchestrator | Monday 02 June 2025 20:01:12 +0000 (0:00:00.549) 0:00:06.617 *********** 2025-06-02 20:03:28.816933 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:28.816944 | orchestrator | 2025-06-02 20:03:28.816955 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-02 20:03:28.816973 | 
orchestrator | Monday 02 June 2025 20:01:13 +0000 (0:00:00.401) 0:00:07.019 *********** 2025-06-02 20:03:28.816985 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:28.816996 | orchestrator | 2025-06-02 20:03:28.817007 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-02 20:03:28.817018 | orchestrator | Monday 02 June 2025 20:01:13 +0000 (0:00:00.363) 0:00:07.382 *********** 2025-06-02 20:03:28.817029 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:28.817040 | orchestrator | 2025-06-02 20:03:28.817051 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 20:03:28.817062 | orchestrator | Monday 02 June 2025 20:01:15 +0000 (0:00:01.348) 0:00:08.731 *********** 2025-06-02 20:03:28.817072 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:03:28.817087 | orchestrator | 2025-06-02 20:03:28.817105 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 20:03:28.817175 | orchestrator | Monday 02 June 2025 20:01:17 +0000 (0:00:02.513) 0:00:11.244 *********** 2025-06-02 20:03:28.817195 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:03:28.817213 | orchestrator | 2025-06-02 20:03:28.817230 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-02 20:03:28.817247 | orchestrator | Monday 02 June 2025 20:01:18 +0000 (0:00:00.801) 0:00:12.046 *********** 2025-06-02 20:03:28.817264 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:28.817283 | orchestrator | 2025-06-02 20:03:28.817300 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-02 20:03:28.817318 | orchestrator | Monday 02 June 2025 20:01:18 +0000 (0:00:00.285) 0:00:12.332 *********** 2025-06-02 20:03:28.817338 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 20:03:28.817357 | orchestrator | 2025-06-02 20:03:28.817380 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-02 20:03:28.817392 | orchestrator | Monday 02 June 2025 20:01:19 +0000 (0:00:00.336) 0:00:12.669 *********** 2025-06-02 20:03:28.817407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:03:28.817435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:03:28.817457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:03:28.817470 | orchestrator | 2025-06-02 20:03:28.817481 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-02 20:03:28.817493 | orchestrator | Monday 02 June 2025 20:01:19 +0000 (0:00:00.973) 0:00:13.642 *********** 2025-06-02 20:03:28.817516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:03:28.817536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:03:28.817549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:03:28.817560 | orchestrator | 2025-06-02 20:03:28.817571 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-02 20:03:28.817582 | orchestrator | Monday 02 June 2025 20:01:21 +0000 (0:00:01.742) 0:00:15.384 *********** 2025-06-02 20:03:28.817594 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-02 20:03:28.817605 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-02 20:03:28.817621 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-02 20:03:28.817632 | orchestrator | 2025-06-02 20:03:28.817643 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-06-02 20:03:28.817654 | orchestrator | Monday 02 June 2025 20:01:23 +0000 (0:00:01.562) 0:00:16.946 *********** 2025-06-02 20:03:28.817664 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-02 20:03:28.817675 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-02 20:03:28.817687 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-02 20:03:28.817698 | orchestrator | 2025-06-02 20:03:28.817709 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-02 20:03:28.817725 | orchestrator | Monday 02 June 2025 20:01:25 +0000 (0:00:02.260) 0:00:19.207 *********** 2025-06-02 20:03:28.817743 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-02 20:03:28.817754 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-02 20:03:28.817765 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-02 20:03:28.817776 | orchestrator | 2025-06-02 20:03:28.817787 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-02 20:03:28.817798 | orchestrator | Monday 02 June 2025 20:01:27 +0000 (0:00:01.641) 0:00:20.849 *********** 2025-06-02 20:03:28.817809 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 20:03:28.817820 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 20:03:28.817831 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 20:03:28.817842 | orchestrator | 2025-06-02 20:03:28.817852 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-06-02 20:03:28.817864 | orchestrator | Monday 02 June 2025 20:01:29 +0000 (0:00:01.907) 0:00:22.757 *********** 2025-06-02 20:03:28.817875 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 20:03:28.817886 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 20:03:28.817897 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 20:03:28.817908 | orchestrator | 2025-06-02 20:03:28.817919 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-02 20:03:28.817929 | orchestrator | Monday 02 June 2025 20:01:30 +0000 (0:00:01.440) 0:00:24.197 *********** 2025-06-02 20:03:28.817940 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 20:03:28.817951 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 20:03:28.817962 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 20:03:28.817973 | orchestrator | 2025-06-02 20:03:28.817984 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 20:03:28.817995 | orchestrator | Monday 02 June 2025 20:01:32 +0000 (0:00:01.509) 0:00:25.707 *********** 2025-06-02 20:03:28.818006 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:03:28.818067 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:03:28.818082 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:03:28.818094 | orchestrator | 2025-06-02 20:03:28.818105 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-02 20:03:28.818162 | orchestrator | Monday 02 June 2025 20:01:32 
+0000 (0:00:00.706) 0:00:26.413 *********** 2025-06-02 20:03:28.818177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:03:28.818205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:03:28.818226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:03:28.818239 | orchestrator | 2025-06-02 20:03:28.818250 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-02 20:03:28.818262 | orchestrator | Monday 02 June 2025 20:01:34 +0000 (0:00:02.005) 0:00:28.419 *********** 2025-06-02 20:03:28.818273 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:03:28.818284 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:03:28.818295 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:03:28.818306 | orchestrator | 2025-06-02 20:03:28.818318 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-02 20:03:28.818337 | 
orchestrator | Monday 02 June 2025 20:01:35 +0000 (0:00:01.107) 0:00:29.526 ***********
2025-06-02 20:03:28.818356 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:03:28.818376 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:03:28.818395 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:03:28.818414 | orchestrator |
2025-06-02 20:03:28.818434 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-06-02 20:03:28.818452 | orchestrator | Monday 02 June 2025 20:01:43 +0000 (0:00:07.680) 0:00:37.207 ***********
2025-06-02 20:03:28.818470 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:03:28.818488 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:03:28.818508 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:03:28.818527 | orchestrator |
2025-06-02 20:03:28.818548 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-02 20:03:28.818568 | orchestrator |
2025-06-02 20:03:28.818587 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-02 20:03:28.818606 | orchestrator | Monday 02 June 2025 20:01:43 +0000 (0:00:00.309) 0:00:37.517 ***********
2025-06-02 20:03:28.818618 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:03:28.818629 | orchestrator |
2025-06-02 20:03:28.818709 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-02 20:03:28.818729 | orchestrator | Monday 02 June 2025 20:01:44 +0000 (0:00:00.699) 0:00:38.220 ***********
2025-06-02 20:03:28.818762 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:03:28.818781 | orchestrator |
2025-06-02 20:03:28.818802 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-02 20:03:28.818821 | orchestrator | Monday 02 June 2025 20:01:44 +0000 (0:00:00.271) 0:00:38.491 ***********
2025-06-02 20:03:28.818841 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:03:28.818862 | orchestrator |
2025-06-02 20:03:28.818882 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-02 20:03:28.818898 | orchestrator | Monday 02 June 2025 20:01:46 +0000 (0:00:01.733) 0:00:40.225 ***********
2025-06-02 20:03:28.818910 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:03:28.818922 | orchestrator |
2025-06-02 20:03:28.818934 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-02 20:03:28.818944 | orchestrator |
2025-06-02 20:03:28.818955 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-02 20:03:28.818966 | orchestrator | Monday 02 June 2025 20:02:42 +0000 (0:00:56.285) 0:01:36.510 ***********
2025-06-02 20:03:28.818977 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:03:28.818988 | orchestrator |
2025-06-02 20:03:28.818999 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-02 20:03:28.819010 | orchestrator | Monday 02 June 2025 20:02:43 +0000 (0:00:00.756) 0:01:37.267 ***********
2025-06-02 20:03:28.819021 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:03:28.819032 | orchestrator |
2025-06-02 20:03:28.819043 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-02 20:03:28.819054 | orchestrator | Monday 02 June 2025 20:02:44 +0000 (0:00:00.626) 0:01:37.893 ***********
2025-06-02 20:03:28.819065 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:03:28.819076 | orchestrator |
2025-06-02 20:03:28.819087 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-02 20:03:28.819098 | orchestrator | Monday 02 June 2025 20:02:46 +0000 (0:00:02.362) 0:01:40.256 ***********
2025-06-02 20:03:28.819109 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:03:28.819147 | orchestrator |
2025-06-02 20:03:28.819161 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-02 20:03:28.819172 | orchestrator |
2025-06-02 20:03:28.819184 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-02 20:03:28.819195 | orchestrator | Monday 02 June 2025 20:03:03 +0000 (0:00:16.979) 0:01:57.235 ***********
2025-06-02 20:03:28.819206 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:03:28.819217 | orchestrator |
2025-06-02 20:03:28.819239 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-02 20:03:28.819251 | orchestrator | Monday 02 June 2025 20:03:04 +0000 (0:00:00.744) 0:01:57.979 ***********
2025-06-02 20:03:28.819332 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:03:28.819345 | orchestrator |
2025-06-02 20:03:28.819356 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-02 20:03:28.819368 | orchestrator | Monday 02 June 2025 20:03:04 +0000 (0:00:00.487) 0:01:58.467 ***********
2025-06-02 20:03:28.819379 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:03:28.819390 | orchestrator |
2025-06-02 20:03:28.819401 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-02 20:03:28.819412 | orchestrator | Monday 02 June 2025 20:03:06 +0000 (0:00:02.003) 0:02:00.470 ***********
2025-06-02 20:03:28.819423 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:03:28.819434 | orchestrator |
2025-06-02 20:03:28.819445 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-06-02 20:03:28.819456 | orchestrator |
2025-06-02 20:03:28.819467 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-06-02 20:03:28.819478 | orchestrator | Monday 02 June 2025 20:03:22 +0000 (0:00:15.474) 0:02:15.945 ***********
2025-06-02 20:03:28.819489 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:03:28.819517 | orchestrator |
2025-06-02 20:03:28.819529 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-06-02 20:03:28.819540 | orchestrator | Monday 02 June 2025 20:03:22 +0000 (0:00:00.509) 0:02:16.454 ***********
2025-06-02 20:03:28.819551 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-02 20:03:28.819562 | orchestrator | enable_outward_rabbitmq_True
2025-06-02 20:03:28.819573 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-02 20:03:28.819584 | orchestrator | outward_rabbitmq_restart
2025-06-02 20:03:28.819595 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:03:28.819606 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:03:28.819618 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:03:28.819629 | orchestrator |
2025-06-02 20:03:28.819640 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-06-02 20:03:28.819651 | orchestrator | skipping: no hosts matched
2025-06-02 20:03:28.819662 | orchestrator |
2025-06-02 20:03:28.819673 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-06-02 20:03:28.819684 | orchestrator | skipping: no hosts matched
2025-06-02 20:03:28.819695 | orchestrator |
2025-06-02 20:03:28.819706 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-06-02 20:03:28.819717 | orchestrator | skipping: no hosts matched
2025-06-02 20:03:28.819728 | orchestrator |
2025-06-02 20:03:28.819739 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:03:28.819829 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-06-02 20:03:28.819856 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 20:03:28.819867 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:03:28.819878 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:03:28.819889 | orchestrator |
2025-06-02 20:03:28.819900 | orchestrator |
2025-06-02 20:03:28.819911 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:03:28.819922 | orchestrator | Monday 02 June 2025 20:03:25 +0000 (0:00:02.814) 0:02:19.269 ***********
2025-06-02 20:03:28.819933 | orchestrator | ===============================================================================
2025-06-02 20:03:28.819961 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 88.74s
2025-06-02 20:03:28.819973 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.68s
2025-06-02 20:03:28.819995 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.10s
2025-06-02 20:03:28.820006 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.46s
2025-06-02 20:03:28.820017 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.82s
2025-06-02 20:03:28.820028 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.51s
2025-06-02 20:03:28.820039 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.26s
2025-06-02 20:03:28.820050 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.20s
2025-06-02 20:03:28.820067 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.01s
2025-06-02 20:03:28.820078 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.91s
2025-06-02 20:03:28.820089 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.74s
2025-06-02 20:03:28.820100 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.64s
2025-06-02 20:03:28.820111 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.56s
2025-06-02 20:03:28.820166 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.51s
2025-06-02 20:03:28.820186 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.44s
2025-06-02 20:03:28.820280 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.39s
2025-06-02 20:03:28.820293 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.35s
2025-06-02 20:03:28.820317 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.11s
2025-06-02 20:03:28.820328 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.97s
2025-06-02 20:03:28.820339 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.95s
2025-06-02 20:03:28.820351 | orchestrator | 2025-06-02 20:03:28 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED
2025-06-02 20:03:28.820362 | orchestrator | 2025-06-02 20:03:28 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED
2025-06-02 20:03:28.820489 | orchestrator | 2025-06-02 20:03:28 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED
2025-06-02 20:03:28.820505 | orchestrator | 2025-06-02 20:03:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:31.860270 | orchestrator | 2025-06-02 20:03:31 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED
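The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" entries around this point come from a client-side loop that polls the task backend until the deploy tasks finish. A minimal sketch of such a loop, assuming a hypothetical `get_state` callable (the real OSISM client's API differs):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll each task until it leaves the PENDING/STARTED states."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)  # hypothetical state lookup
            print(f"INFO  | Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Note that in the log the checks land roughly three seconds apart even though the message says one second; the extra time is the per-iteration cost of querying each task's state.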
2025-06-02 20:03:31.862098 | orchestrator | 2025-06-02 20:03:31 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:31.864496 | orchestrator | 2025-06-02 20:03:31 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:31.864553 | orchestrator | 2025-06-02 20:03:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:34.923691 | orchestrator | 2025-06-02 20:03:34 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:34.923767 | orchestrator | 2025-06-02 20:03:34 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:34.924627 | orchestrator | 2025-06-02 20:03:34 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:34.924661 | orchestrator | 2025-06-02 20:03:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:37.971658 | orchestrator | 2025-06-02 20:03:37 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:37.971754 | orchestrator | 2025-06-02 20:03:37 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:37.973476 | orchestrator | 2025-06-02 20:03:37 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:37.973772 | orchestrator | 2025-06-02 20:03:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:41.013255 | orchestrator | 2025-06-02 20:03:41 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:41.016032 | orchestrator | 2025-06-02 20:03:41 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:41.018384 | orchestrator | 2025-06-02 20:03:41 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:41.019214 | orchestrator | 2025-06-02 20:03:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:44.063237 | orchestrator | 2025-06-02 
20:03:44 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:44.063831 | orchestrator | 2025-06-02 20:03:44 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:44.066803 | orchestrator | 2025-06-02 20:03:44 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:44.066869 | orchestrator | 2025-06-02 20:03:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:47.119902 | orchestrator | 2025-06-02 20:03:47 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:47.121763 | orchestrator | 2025-06-02 20:03:47 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:47.124614 | orchestrator | 2025-06-02 20:03:47 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:47.124669 | orchestrator | 2025-06-02 20:03:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:50.169953 | orchestrator | 2025-06-02 20:03:50 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:50.170660 | orchestrator | 2025-06-02 20:03:50 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:50.171666 | orchestrator | 2025-06-02 20:03:50 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:50.171701 | orchestrator | 2025-06-02 20:03:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:53.211182 | orchestrator | 2025-06-02 20:03:53 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:53.211273 | orchestrator | 2025-06-02 20:03:53 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:53.212760 | orchestrator | 2025-06-02 20:03:53 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:53.212805 | orchestrator | 2025-06-02 20:03:53 | INFO  | Wait 1 
second(s) until the next check 2025-06-02 20:03:56.258312 | orchestrator | 2025-06-02 20:03:56 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:56.259047 | orchestrator | 2025-06-02 20:03:56 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:56.259181 | orchestrator | 2025-06-02 20:03:56 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:56.259306 | orchestrator | 2025-06-02 20:03:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:03:59.311601 | orchestrator | 2025-06-02 20:03:59 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:03:59.313311 | orchestrator | 2025-06-02 20:03:59 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:03:59.315476 | orchestrator | 2025-06-02 20:03:59 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:03:59.315745 | orchestrator | 2025-06-02 20:03:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:02.348927 | orchestrator | 2025-06-02 20:04:02 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:02.350173 | orchestrator | 2025-06-02 20:04:02 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:02.351327 | orchestrator | 2025-06-02 20:04:02 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:04:02.351355 | orchestrator | 2025-06-02 20:04:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:05.396711 | orchestrator | 2025-06-02 20:04:05 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:05.400779 | orchestrator | 2025-06-02 20:04:05 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:05.400868 | orchestrator | 2025-06-02 20:04:05 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 
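The rabbitmq service definition earlier in this log carries a healthcheck block (`{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}`). A sketch of how a dict of that shape could map onto `docker run` health flags; the helper name is made up and the values are assumed to be seconds:

```python
def healthcheck_flags(hc):
    """Translate a kolla-style healthcheck dict into docker-run flags."""
    cmd = hc["test"]
    # ['CMD-SHELL', '...'] means: run the remainder through a shell.
    test = cmd[1] if cmd[0] == "CMD-SHELL" else " ".join(cmd)
    return [
        "--health-cmd", test,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
```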
2025-06-02 20:04:05.400930 | orchestrator | 2025-06-02 20:04:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:08.451018 | orchestrator | 2025-06-02 20:04:08 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:08.453001 | orchestrator | 2025-06-02 20:04:08 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:08.455881 | orchestrator | 2025-06-02 20:04:08 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:04:08.456505 | orchestrator | 2025-06-02 20:04:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:11.502768 | orchestrator | 2025-06-02 20:04:11 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:11.508958 | orchestrator | 2025-06-02 20:04:11 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:11.509259 | orchestrator | 2025-06-02 20:04:11 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:04:11.509287 | orchestrator | 2025-06-02 20:04:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:14.553414 | orchestrator | 2025-06-02 20:04:14 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:14.555020 | orchestrator | 2025-06-02 20:04:14 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:14.555087 | orchestrator | 2025-06-02 20:04:14 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:04:14.555106 | orchestrator | 2025-06-02 20:04:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:17.597588 | orchestrator | 2025-06-02 20:04:17 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:17.597848 | orchestrator | 2025-06-02 20:04:17 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:17.598801 | orchestrator | 2025-06-02 
20:04:17 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:04:17.598851 | orchestrator | 2025-06-02 20:04:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:20.635334 | orchestrator | 2025-06-02 20:04:20 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:20.636519 | orchestrator | 2025-06-02 20:04:20 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:20.638715 | orchestrator | 2025-06-02 20:04:20 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:04:20.638790 | orchestrator | 2025-06-02 20:04:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:23.683830 | orchestrator | 2025-06-02 20:04:23 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:23.685461 | orchestrator | 2025-06-02 20:04:23 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:23.685862 | orchestrator | 2025-06-02 20:04:23 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:04:23.685896 | orchestrator | 2025-06-02 20:04:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:26.734373 | orchestrator | 2025-06-02 20:04:26 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:26.738199 | orchestrator | 2025-06-02 20:04:26 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:26.738372 | orchestrator | 2025-06-02 20:04:26 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state STARTED 2025-06-02 20:04:26.738937 | orchestrator | 2025-06-02 20:04:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:29.780681 | orchestrator | 2025-06-02 20:04:29 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:29.788821 | orchestrator | 2025-06-02 20:04:29.788924 | orchestrator | 2025-06-02 
20:04:29.788939 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:04:29.788951 | orchestrator |
2025-06-02 20:04:29.788975 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:04:29.788988 | orchestrator | Monday 02 June 2025 20:01:50 +0000 (0:00:00.376) 0:00:00.376 ***********
2025-06-02 20:04:29.788999 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:29.789011 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:29.789022 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:29.789051 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:04:29.789062 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:04:29.789072 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:04:29.789140 | orchestrator |
2025-06-02 20:04:29.789151 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:04:29.789162 | orchestrator | Monday 02 June 2025 20:01:51 +0000 (0:00:00.958) 0:00:01.334 ***********
2025-06-02 20:04:29.789173 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-06-02 20:04:29.789184 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-06-02 20:04:29.789222 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-06-02 20:04:29.789234 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-06-02 20:04:29.789245 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-06-02 20:04:29.789256 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-06-02 20:04:29.789279 | orchestrator |
2025-06-02 20:04:29.789290 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-06-02 20:04:29.789301 | orchestrator |
2025-06-02 20:04:29.789311 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-06-02 20:04:29.789322 | orchestrator | Monday 02 June 2025 20:01:52 +0000 (0:00:00.811) 0:00:02.146 ***********
2025-06-02 20:04:29.789334 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:04:29.789345 | orchestrator |
2025-06-02 20:04:29.789356 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-06-02 20:04:29.789367 | orchestrator | Monday 02 June 2025 20:01:53 +0000 (0:00:01.095) 0:00:03.242 ***********
2025-06-02 20:04:29.789434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789555 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789566 | orchestrator |
2025-06-02 20:04:29.789605 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-06-02 20:04:29.789617 | orchestrator | Monday 02 June 2025 20:01:54 +0000 (0:00:01.349) 0:00:04.592 ***********
2025-06-02 20:04:29.789628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789760 | orchestrator |
2025-06-02 20:04:29.789781 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-06-02 20:04:29.789818 | orchestrator | Monday 02 June 2025 20:01:56 +0000 (0:00:01.722) 0:00:06.314 ***********
2025-06-02 20:04:29.789830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.789921 | orchestrator |
2025-06-02 20:04:29.789932 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-02 20:04:29.789943 | orchestrator | Monday 02 June 2025 20:01:57 +0000 (0:00:01.131) 0:00:07.446 ***********
2025-06-02 20:04:29.789954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790397 | orchestrator |
2025-06-02 20:04:29.790431 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-06-02 20:04:29.790475 | orchestrator | Monday 02 June 2025 20:01:59 +0000 (0:00:01.378) 0:00:08.825 ***********
2025-06-02 20:04:29.790495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:29.790766 | orchestrator |
2025-06-02 20:04:29.790785 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-06-02 20:04:29.790803 | orchestrator | Monday 02 June 2025 20:02:00 +0000 (0:00:01.669) 0:00:10.495 ***********
2025-06-02 20:04:29.790820 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:29.790837 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:29.790854 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:29.790996 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:04:29.791015 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:04:29.791072 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:04:29.791091 | orchestrator |
2025-06-02 20:04:29.791107 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-02 20:04:29.791124 | orchestrator | Monday 02 June 2025 20:02:03 +0000 (0:00:02.704) 0:00:13.199 ***********
2025-06-02 20:04:29.791140 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-02 20:04:29.791274 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-02 20:04:29.791288 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-02 20:04:29.791298 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-02 20:04:29.791307 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-02 20:04:29.791328 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-02 20:04:29.791338 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:04:29.791348 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:04:29.791370 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:04:29.791380 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:04:29.791390 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:04:29.791400 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:04:29.791410 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:04:29.791509 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:04:29.791530 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:04:29.791574 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:04:29.791612 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:04:29.791629 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:04:29.791659 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:04:29.791677 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:04:29.791693 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:04:29.791710 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:04:29.791830 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:04:29.791847 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:04:29.791891 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:04:29.791910 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:04:29.791949 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:04:29.791974 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:04:29.791990 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:04:29.792007 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:04:29.792023 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:04:29.792169 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:04:29.792186 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:04:29.792224 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:04:29.792241 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:04:29.792258 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:04:29.792274 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 20:04:29.792290 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 20:04:29.792305 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 20:04:29.792321 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 20:04:29.792337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 20:04:29.792496 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 20:04:29.792517 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-02 20:04:29.792535 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-02 20:04:29.792586 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-02 20:04:29.792603 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-02 20:04:29.792619 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-02 20:04:29.792649 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-02 20:04:29.792769 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 20:04:29.792787 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 20:04:29.792804 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 20:04:29.792821 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 20:04:29.792837 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 20:04:29.792876 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 20:04:29.792891 | orchestrator |
2025-06-02 20:04:29.792908 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:04:29.792925 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:20.024) 0:00:33.223 ***********
2025-06-02 20:04:29.792942 | orchestrator |
2025-06-02 20:04:29.792960 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:04:29.792977 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:00.076) 0:00:33.300 ***********
2025-06-02 20:04:29.793087 | orchestrator |
2025-06-02 20:04:29.793106 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:04:29.793123 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:00.077) 0:00:33.377 ***********
2025-06-02 20:04:29.793140 | orchestrator |
2025-06-02 20:04:29.793157 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:04:29.793173 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:00.076) 0:00:33.453 ***********
2025-06-02 20:04:29.793306 | orchestrator |
2025-06-02 20:04:29.793325 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:04:29.793343 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:00.070) 0:00:33.523 ***********
2025-06-02 20:04:29.793385 | orchestrator |
2025-06-02 20:04:29.793403 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:04:29.793428 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:00.084) 0:00:33.608 ***********
2025-06-02 20:04:29.793445 | orchestrator |
2025-06-02 20:04:29.793462 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-06-02 20:04:29.793595 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:00.065) 0:00:33.673 ***********
2025-06-02 20:04:29.793612 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:29.793630 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:04:29.793677 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:29.793696 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:29.793712 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:04:29.793726 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:04:29.793740 | orchestrator |
2025-06-02 20:04:29.793754 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-06-02 20:04:29.794205 | orchestrator | Monday 02 June 2025 20:02:25 +0000 (0:00:01.691) 0:00:35.364 ***********
2025-06-02 20:04:29.794258 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:29.794275 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:29.794292 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:29.794308 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:04:29.794325 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:04:29.794342 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:04:29.794413 | orchestrator |
2025-06-02 20:04:29.794424 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-06-02 20:04:29.794447 | orchestrator |
2025-06-02 20:04:29.794566 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 20:04:29.794581 | orchestrator | Monday 02 June 2025 20:03:02 +0000 (0:00:37.168) 0:01:12.533 ***********
2025-06-02 20:04:29.794591 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:29.794601 | orchestrator |
2025-06-02 20:04:29.794654 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 20:04:29.794665 | orchestrator | Monday 02 June 2025 20:03:03 +0000 (0:00:00.776) 0:01:13.310 ***********
2025-06-02 20:04:29.794675 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:29.794697 | orchestrator |
2025-06-02 20:04:29.794715 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-06-02 20:04:29.794731 | orchestrator | Monday 02 June 2025 20:03:04 +0000 (0:00:01.058) 0:01:14.368 ***********
2025-06-02 20:04:29.794741 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:29.794751 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:29.794761 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:29.794771 | orchestrator |
2025-06-02 20:04:29.794780 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-06-02 20:04:29.794864 | orchestrator | Monday 02 June 2025 20:03:05 +0000 (0:00:01.166) 0:01:15.535 ***********
2025-06-02 20:04:29.794875 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:29.794885 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:29.794904 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:29.794927 | orchestrator |
2025-06-02 20:04:29.794938 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-06-02 20:04:29.794948 | orchestrator | Monday 02 June 2025 20:03:06 +0000 (0:00:00.438) 0:01:15.974 ***********
2025-06-02 20:04:29.794957 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:29.794967 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:29.794976 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:29.794986 | orchestrator |
2025-06-02 20:04:29.794996 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-06-02 20:04:29.795097 | orchestrator | Monday 02 June 2025 20:03:06 +0000 (0:00:00.305) 0:01:16.280 ***********
2025-06-02 20:04:29.795115 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:29.795125 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:29.795135 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:29.795145 | orchestrator |
2025-06-02 20:04:29.795179 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-06-02 20:04:29.795190 | orchestrator | Monday 02 June 2025 20:03:07 +0000 (0:00:00.575) 0:01:16.855 ***********
2025-06-02 20:04:29.795200 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:29.795219 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:29.795229 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:29.795239 | orchestrator |
2025-06-02 20:04:29.795249 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-06-02 20:04:29.795259 | orchestrator | Monday 02 June 2025 20:03:07 +0000 (0:00:00.326) 0:01:17.181 ***********
2025-06-02 20:04:29.795269 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.795279 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.795288 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.795298 | orchestrator |
2025-06-02 20:04:29.795308 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-06-02 20:04:29.795409 | orchestrator | Monday 02 June 2025 20:03:07 +0000 (0:00:00.367) 0:01:17.549 ***********
2025-06-02 20:04:29.795426 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.795442 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.795544 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.795568 | orchestrator |
2025-06-02 20:04:29.795584 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-06-02 20:04:29.795602 | orchestrator | Monday 02 June 2025 20:03:08 +0000 (0:00:00.265) 0:01:17.814 ***********
2025-06-02 20:04:29.795692 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.795702 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.795712 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.795744 | orchestrator |
2025-06-02 20:04:29.795755 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-06-02 20:04:29.795765 | orchestrator | Monday 02 June 2025 20:03:08 +0000 (0:00:00.431) 0:01:18.246 ***********
2025-06-02 20:04:29.795785 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.795795 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.795805 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.795814 | orchestrator |
2025-06-02 20:04:29.795824 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-06-02 20:04:29.795833 | orchestrator | Monday 02 June 2025 20:03:08 +0000 (0:00:00.288) 0:01:18.535 ***********
2025-06-02 20:04:29.795843 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.795853 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.795871 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.795985 | orchestrator |
2025-06-02 20:04:29.796005 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-06-02 20:04:29.796114 | orchestrator | Monday 02 June 2025 20:03:09 +0000 (0:00:00.271) 0:01:18.806 ***********
2025-06-02 20:04:29.796131 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.796141 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.796150 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.796160 | orchestrator |
2025-06-02 20:04:29.796170 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-06-02 20:04:29.796179 | orchestrator | Monday 02 June 2025 20:03:09 +0000 (0:00:00.273) 0:01:19.079 ***********
2025-06-02 20:04:29.796189 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.796198 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.796208 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.796217 | orchestrator |
2025-06-02 20:04:29.796227 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-06-02 20:04:29.796286 | orchestrator | Monday 02 June 2025 20:03:09 +0000 (0:00:00.378) 0:01:19.458 ***********
2025-06-02 20:04:29.796297 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.796318 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.796328 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.796338 | orchestrator |
2025-06-02 20:04:29.796348 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-06-02 20:04:29.796375 | orchestrator | Monday 02 June 2025 20:03:09 +0000 (0:00:00.242) 0:01:19.700 ***********
2025-06-02 20:04:29.796386 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.796395 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.796405 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.796415 | orchestrator |
2025-06-02 20:04:29.796426 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-06-02 20:04:29.796436 | orchestrator | Monday 02 June 2025 20:03:10 +0000 (0:00:00.262) 0:01:19.962 ***********
2025-06-02 20:04:29.796446 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.796455 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.796479 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.796489 | orchestrator |
2025-06-02 20:04:29.796499 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-06-02 20:04:29.796508 | orchestrator | Monday 02 June 2025 20:03:10 +0000 (0:00:00.263) 0:01:20.226 ***********
2025-06-02 20:04:29.796518 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.796527 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.796537 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.796546 | orchestrator |
2025-06-02 20:04:29.796562 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-06-02 20:04:29.796658 | orchestrator | Monday 02 June 2025 20:03:11 +0000 (0:00:00.515) 0:01:20.742 ***********
2025-06-02 20:04:29.796699 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.796732 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.796762 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.796780 | orchestrator |
2025-06-02 20:04:29.796795 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 20:04:29.796805 | orchestrator | Monday 02 June 2025 20:03:11 +0000 (0:00:00.528) 0:01:21.270 ***********
2025-06-02 20:04:29.796864 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:29.796874 | orchestrator |
2025-06-02 20:04:29.796884 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-06-02 20:04:29.796894 | orchestrator | Monday 02 June 2025 20:03:12 +0000 (0:00:01.091) 0:01:22.361 ***********
2025-06-02 20:04:29.796903 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:29.796913 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:29.796947 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:29.796957 | orchestrator |
2025-06-02 20:04:29.796967 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-06-02 20:04:29.796986 | orchestrator | Monday 02 June 2025 20:03:14 +0000 (0:00:02.196) 0:01:24.558 ***********
2025-06-02 20:04:29.796997 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:29.797006 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:29.797016 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:29.797025 | orchestrator |
2025-06-02 20:04:29.797188 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-06-02 20:04:29.797212 | orchestrator | Monday 02 June 2025 20:03:15 +0000 (0:00:00.559) 0:01:25.117 ***********
2025-06-02 20:04:29.797228 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.797244 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.797262 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.797279 | orchestrator |
2025-06-02 20:04:29.797297 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-06-02 20:04:29.797313 | orchestrator | Monday 02 June 2025 20:03:15 +0000 (0:00:00.313) 0:01:25.431 ***********
2025-06-02 20:04:29.797331 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.797348 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:29.797365 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:29.797381 | orchestrator |
2025-06-02 20:04:29.797397 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-06-02 20:04:29.797415 | orchestrator | Monday 02 June 2025 20:03:16 +0000 (0:00:00.379) 0:01:25.810 ***********
2025-06-02 20:04:29.797428 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:29.797441 |
orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:29.797449 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:29.797455 | orchestrator | 2025-06-02 20:04:29.797462 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-02 20:04:29.797469 | orchestrator | Monday 02 June 2025 20:03:17 +0000 (0:00:00.918) 0:01:26.728 *********** 2025-06-02 20:04:29.797475 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:29.797482 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:29.797488 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:29.797495 | orchestrator | 2025-06-02 20:04:29.797502 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-02 20:04:29.797509 | orchestrator | Monday 02 June 2025 20:03:17 +0000 (0:00:00.406) 0:01:27.134 *********** 2025-06-02 20:04:29.797517 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:29.797536 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:29.797549 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:29.797555 | orchestrator | 2025-06-02 20:04:29.797562 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-02 20:04:29.797572 | orchestrator | Monday 02 June 2025 20:03:17 +0000 (0:00:00.517) 0:01:27.652 *********** 2025-06-02 20:04:29.797584 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:29.797595 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:29.797613 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:29.797620 | orchestrator | 2025-06-02 20:04:29.797626 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 20:04:29.797633 | orchestrator | Monday 02 June 2025 20:03:18 +0000 (0:00:00.438) 0:01:28.090 *********** 2025-06-02 20:04:29.797641 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797692 | orchestrator | 2025-06-02 20:04:29 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:29.797704 | orchestrator | 2025-06-02 20:04:29 | INFO  | Task 4948f0d5-0321-432b-957b-728f2bc52f69 is in state SUCCESS 2025-06-02 20:04:29.797717 | orchestrator | 2025-06-02 20:04:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:29.797730 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797811 | orchestrator | 2025-06-02 20:04:29.797820 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 20:04:29.797827 | orchestrator | Monday 02 June 2025 20:03:20 +0000 (0:00:01.662) 0:01:29.753 *********** 2025-06-02 20:04:29.797835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797912 | orchestrator | 2025-06-02 20:04:29.797919 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 20:04:29.797929 | orchestrator | Monday 02 June 2025 20:03:24 +0000 (0:00:04.141) 0:01:33.894 *********** 2025-06-02 20:04:29.797936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.797996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798070 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798083 | orchestrator | 2025-06-02 20:04:29.798095 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 20:04:29.798106 | orchestrator | Monday 02 June 2025 20:03:26 +0000 (0:00:02.402) 0:01:36.297 *********** 2025-06-02 20:04:29.798114 | orchestrator | 2025-06-02 20:04:29.798120 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 20:04:29.798127 | orchestrator | Monday 02 June 2025 20:03:26 +0000 (0:00:00.073) 0:01:36.370 *********** 2025-06-02 20:04:29.798133 | orchestrator | 2025-06-02 20:04:29.798140 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 20:04:29.798147 | orchestrator | Monday 02 June 2025 20:03:26 +0000 (0:00:00.067) 0:01:36.437 *********** 2025-06-02 20:04:29.798153 | orchestrator | 2025-06-02 20:04:29.798163 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 20:04:29.798170 | orchestrator | Monday 02 June 2025 20:03:26 +0000 (0:00:00.071) 0:01:36.509 *********** 2025-06-02 20:04:29.798176 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:29.798183 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:29.798189 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:29.798198 | orchestrator | 2025-06-02 20:04:29.798208 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 20:04:29.798215 | orchestrator | Monday 02 June 2025 20:03:34 +0000 
(0:00:07.502) 0:01:44.011 *********** 2025-06-02 20:04:29.798222 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:29.798229 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:29.798236 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:29.798243 | orchestrator | 2025-06-02 20:04:29.798249 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 20:04:29.798256 | orchestrator | Monday 02 June 2025 20:03:41 +0000 (0:00:07.580) 0:01:51.592 *********** 2025-06-02 20:04:29.798263 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:29.798269 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:29.798276 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:29.798283 | orchestrator | 2025-06-02 20:04:29.798289 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 20:04:29.798296 | orchestrator | Monday 02 June 2025 20:03:49 +0000 (0:00:07.368) 0:01:58.961 *********** 2025-06-02 20:04:29.798303 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:29.798310 | orchestrator | 2025-06-02 20:04:29.798321 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 20:04:29.798333 | orchestrator | Monday 02 June 2025 20:03:49 +0000 (0:00:00.139) 0:01:59.100 *********** 2025-06-02 20:04:29.798346 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:29.798358 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:29.798370 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:29.798377 | orchestrator | 2025-06-02 20:04:29.798384 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 20:04:29.798394 | orchestrator | Monday 02 June 2025 20:03:50 +0000 (0:00:00.841) 0:01:59.941 *********** 2025-06-02 20:04:29.798404 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:29.798410 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 20:04:29.798417 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:29.798424 | orchestrator | 2025-06-02 20:04:29.798430 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 20:04:29.798437 | orchestrator | Monday 02 June 2025 20:03:51 +0000 (0:00:00.875) 0:02:00.817 *********** 2025-06-02 20:04:29.798444 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:29.798450 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:29.798457 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:29.798466 | orchestrator | 2025-06-02 20:04:29.798476 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 20:04:29.798487 | orchestrator | Monday 02 June 2025 20:03:51 +0000 (0:00:00.756) 0:02:01.573 *********** 2025-06-02 20:04:29.798498 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:29.798506 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:29.798513 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:29.798519 | orchestrator | 2025-06-02 20:04:29.798526 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 20:04:29.798533 | orchestrator | Monday 02 June 2025 20:03:52 +0000 (0:00:00.660) 0:02:02.234 *********** 2025-06-02 20:04:29.798539 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:29.798546 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:29.798553 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:29.798559 | orchestrator | 2025-06-02 20:04:29.798567 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 20:04:29.798579 | orchestrator | Monday 02 June 2025 20:03:53 +0000 (0:00:00.757) 0:02:02.992 *********** 2025-06-02 20:04:29.798591 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:29.798603 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:29.798615 | 
orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:29.798626 | orchestrator | 2025-06-02 20:04:29.798637 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-02 20:04:29.798649 | orchestrator | Monday 02 June 2025 20:03:54 +0000 (0:00:01.163) 0:02:04.155 *********** 2025-06-02 20:04:29.798662 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:29.798674 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:29.798685 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:29.798692 | orchestrator | 2025-06-02 20:04:29.798699 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 20:04:29.798706 | orchestrator | Monday 02 June 2025 20:03:54 +0000 (0:00:00.318) 0:02:04.473 *********** 2025-06-02 20:04:29.798713 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798720 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798727 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 
20:04:29.798734 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798742 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798749 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798787 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798800 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798808 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798815 | orchestrator | 2025-06-02 20:04:29.798821 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 20:04:29.798828 | orchestrator | Monday 02 June 2025 20:03:56 +0000 (0:00:01.380) 0:02:05.854 *********** 2025-06-02 20:04:29.798835 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798842 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798849 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798856 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798906 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798942 | orchestrator | 2025-06-02 20:04:29.798949 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 20:04:29.798955 | orchestrator | Monday 02 June 2025 20:04:00 +0000 (0:00:04.519) 0:02:10.374 *********** 2025-06-02 20:04:29.798962 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798969 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798976 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798986 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.798998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.799013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.799024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.799053 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.799063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:04:29.799074 | orchestrator | 2025-06-02 20:04:29.799081 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 20:04:29.799087 | orchestrator | Monday 02 June 2025 20:04:03 +0000 (0:00:03.200) 0:02:13.574 *********** 2025-06-02 20:04:29.799094 | orchestrator | 2025-06-02 20:04:29.799101 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 20:04:29.799107 | orchestrator | Monday 02 June 2025 20:04:03 +0000 (0:00:00.063) 0:02:13.637 *********** 2025-06-02 20:04:29.799114 | orchestrator | 2025-06-02 20:04:29.799121 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 20:04:29.799131 | orchestrator | Monday 02 June 2025 20:04:04 +0000 (0:00:00.069) 0:02:13.706 *********** 2025-06-02 20:04:29.799138 | orchestrator | 2025-06-02 20:04:29.799145 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 20:04:29.799152 | orchestrator | Monday 02 June 2025 20:04:04 +0000 (0:00:00.076) 0:02:13.783 *********** 
2025-06-02 20:04:29.799158 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:29.799166 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:29.799178 | orchestrator | 2025-06-02 20:04:29.799204 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 20:04:29.799217 | orchestrator | Monday 02 June 2025 20:04:10 +0000 (0:00:06.166) 0:02:19.950 *********** 2025-06-02 20:04:29.799229 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:29.799237 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:29.799244 | orchestrator | 2025-06-02 20:04:29.799251 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 20:04:29.799257 | orchestrator | Monday 02 June 2025 20:04:16 +0000 (0:00:06.301) 0:02:26.251 *********** 2025-06-02 20:04:29.799264 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:29.799270 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:29.799277 | orchestrator | 2025-06-02 20:04:29.799286 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 20:04:29.799298 | orchestrator | Monday 02 June 2025 20:04:23 +0000 (0:00:06.602) 0:02:32.854 *********** 2025-06-02 20:04:29.799309 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:29.799316 | orchestrator | 2025-06-02 20:04:29.799331 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 20:04:29.799338 | orchestrator | Monday 02 June 2025 20:04:23 +0000 (0:00:00.142) 0:02:32.997 *********** 2025-06-02 20:04:29.799345 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:29.799355 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:29.799366 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:29.799377 | orchestrator | 2025-06-02 20:04:29.799384 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 
2025-06-02 20:04:29.799391 | orchestrator | Monday 02 June 2025 20:04:24 +0000 (0:00:01.003) 0:02:34.000 *********** 2025-06-02 20:04:29.799398 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:29.799404 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:29.799411 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:29.799418 | orchestrator | 2025-06-02 20:04:29.799424 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 20:04:29.799431 | orchestrator | Monday 02 June 2025 20:04:24 +0000 (0:00:00.700) 0:02:34.701 *********** 2025-06-02 20:04:29.799438 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:29.799444 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:29.799451 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:29.799458 | orchestrator | 2025-06-02 20:04:29.799470 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 20:04:29.799482 | orchestrator | Monday 02 June 2025 20:04:25 +0000 (0:00:00.827) 0:02:35.529 *********** 2025-06-02 20:04:29.799494 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:29.799507 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:29.799519 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:29.799529 | orchestrator | 2025-06-02 20:04:29.799536 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 20:04:29.799546 | orchestrator | Monday 02 June 2025 20:04:26 +0000 (0:00:00.625) 0:02:36.155 *********** 2025-06-02 20:04:29.799553 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:29.799560 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:29.799567 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:29.799573 | orchestrator | 2025-06-02 20:04:29.799585 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 20:04:29.799598 | orchestrator 
| Monday 02 June 2025 20:04:27 +0000 (0:00:00.962) 0:02:37.117 *********** 2025-06-02 20:04:29.799607 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:29.799613 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:29.799620 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:29.799627 | orchestrator | 2025-06-02 20:04:29.799633 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:04:29.799642 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-02 20:04:29.799654 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-02 20:04:29.799662 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-02 20:04:29.799668 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:04:29.799676 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:04:29.799682 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:04:29.799690 | orchestrator | 2025-06-02 20:04:29.799701 | orchestrator | 2025-06-02 20:04:29.799713 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:04:29.799723 | orchestrator | Monday 02 June 2025 20:04:28 +0000 (0:00:00.840) 0:02:37.958 *********** 2025-06-02 20:04:29.799733 | orchestrator | =============================================================================== 2025-06-02 20:04:29.799743 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 37.17s 2025-06-02 20:04:29.799754 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.02s 2025-06-02 20:04:29.799767 | orchestrator | ovn-db : 
Restart ovn-northd container ---------------------------------- 13.97s 2025-06-02 20:04:29.799791 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.88s 2025-06-02 20:04:29.799803 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.67s 2025-06-02 20:04:29.799813 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.52s 2025-06-02 20:04:29.799837 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.14s 2025-06-02 20:04:29.799849 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.20s 2025-06-02 20:04:29.799862 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.70s 2025-06-02 20:04:29.799874 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.40s 2025-06-02 20:04:29.799886 | orchestrator | ovn-db : Set bootstrap args fact for NB (new cluster) ------------------- 2.20s 2025-06-02 20:04:29.799898 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.72s 2025-06-02 20:04:29.799908 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.69s 2025-06-02 20:04:29.799919 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.67s 2025-06-02 20:04:29.799930 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.66s 2025-06-02 20:04:29.799941 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.38s 2025-06-02 20:04:29.799953 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.38s 2025-06-02 20:04:29.799966 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.35s 2025-06-02 20:04:29.799978 | orchestrator | ovn-db : Checking for 
any existing OVN DB container volumes ------------- 1.17s 2025-06-02 20:04:29.799991 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.16s 2025-06-02 20:04:32.812833 | orchestrator | 2025-06-02 20:04:32 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:32.814274 | orchestrator | 2025-06-02 20:04:32 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:32.814334 | orchestrator | 2025-06-02 20:04:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:35.854492 | orchestrator | 2025-06-02 20:04:35 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:35.854609 | orchestrator | 2025-06-02 20:04:35 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:35.854627 | orchestrator | 2025-06-02 20:04:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:38.905481 | orchestrator | 2025-06-02 20:04:38 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:38.911270 | orchestrator | 2025-06-02 20:04:38 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:38.911327 | orchestrator | 2025-06-02 20:04:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:41.955734 | orchestrator | 2025-06-02 20:04:41 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:41.956362 | orchestrator | 2025-06-02 20:04:41 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:41.956421 | orchestrator | 2025-06-02 20:04:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:44.990795 | orchestrator | 2025-06-02 20:04:44 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:44.992094 | orchestrator | 2025-06-02 20:04:44 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 
2025-06-02 20:04:44.992138 | orchestrator | 2025-06-02 20:04:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:48.035666 | orchestrator | 2025-06-02 20:04:48 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:48.036977 | orchestrator | 2025-06-02 20:04:48 | INFO  | Task 7b00080e-ae07-42f8-8125-423222339e59 is in state STARTED 2025-06-02 20:04:48.037918 | orchestrator | 2025-06-02 20:04:48 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:48.038420 | orchestrator | 2025-06-02 20:04:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:51.080269 | orchestrator | 2025-06-02 20:04:51 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:51.082710 | orchestrator | 2025-06-02 20:04:51 | INFO  | Task 7b00080e-ae07-42f8-8125-423222339e59 is in state STARTED 2025-06-02 20:04:51.082769 | orchestrator | 2025-06-02 20:04:51 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:51.082786 | orchestrator | 2025-06-02 20:04:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:54.117325 | orchestrator | 2025-06-02 20:04:54 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:54.118622 | orchestrator | 2025-06-02 20:04:54 | INFO  | Task 7b00080e-ae07-42f8-8125-423222339e59 is in state STARTED 2025-06-02 20:04:54.120867 | orchestrator | 2025-06-02 20:04:54 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:54.121488 | orchestrator | 2025-06-02 20:04:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:57.153756 | orchestrator | 2025-06-02 20:04:57 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:04:57.155711 | orchestrator | 2025-06-02 20:04:57 | INFO  | Task 7b00080e-ae07-42f8-8125-423222339e59 is in state STARTED 2025-06-02 20:04:57.156800 | orchestrator | 2025-06-02 
20:04:57 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:04:57.156861 | orchestrator | 2025-06-02 20:04:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:00.185290 | orchestrator | 2025-06-02 20:05:00 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:00.185364 | orchestrator | 2025-06-02 20:05:00 | INFO  | Task 7b00080e-ae07-42f8-8125-423222339e59 is in state STARTED 2025-06-02 20:05:00.186741 | orchestrator | 2025-06-02 20:05:00 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:00.186824 | orchestrator | 2025-06-02 20:05:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:03.233673 | orchestrator | 2025-06-02 20:05:03 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:03.234388 | orchestrator | 2025-06-02 20:05:03 | INFO  | Task 7b00080e-ae07-42f8-8125-423222339e59 is in state SUCCESS 2025-06-02 20:05:03.235430 | orchestrator | 2025-06-02 20:05:03 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:03.235470 | orchestrator | 2025-06-02 20:05:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:06.286649 | orchestrator | 2025-06-02 20:05:06 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:06.287758 | orchestrator | 2025-06-02 20:05:06 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:06.287946 | orchestrator | 2025-06-02 20:05:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:09.346114 | orchestrator | 2025-06-02 20:05:09 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:09.348454 | orchestrator | 2025-06-02 20:05:09 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:09.348524 | orchestrator | 2025-06-02 20:05:09 | INFO  | Wait 1 second(s) until the next 
check 2025-06-02 20:05:12.398304 | orchestrator | 2025-06-02 20:05:12 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:12.399309 | orchestrator | 2025-06-02 20:05:12 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:12.399401 | orchestrator | 2025-06-02 20:05:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:15.450269 | orchestrator | 2025-06-02 20:05:15 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:15.454494 | orchestrator | 2025-06-02 20:05:15 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:15.454576 | orchestrator | 2025-06-02 20:05:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:18.510508 | orchestrator | 2025-06-02 20:05:18 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:18.512504 | orchestrator | 2025-06-02 20:05:18 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:18.513090 | orchestrator | 2025-06-02 20:05:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:21.560910 | orchestrator | 2025-06-02 20:05:21 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:21.561077 | orchestrator | 2025-06-02 20:05:21 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:21.561091 | orchestrator | 2025-06-02 20:05:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:24.601744 | orchestrator | 2025-06-02 20:05:24 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:24.602697 | orchestrator | 2025-06-02 20:05:24 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:24.602837 | orchestrator | 2025-06-02 20:05:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:27.638288 | orchestrator | 2025-06-02 20:05:27 | INFO  | 
Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:27.639194 | orchestrator | 2025-06-02 20:05:27 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:27.639228 | orchestrator | 2025-06-02 20:05:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:30.684738 | orchestrator | 2025-06-02 20:05:30 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:30.685496 | orchestrator | 2025-06-02 20:05:30 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:30.685543 | orchestrator | 2025-06-02 20:05:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:33.736655 | orchestrator | 2025-06-02 20:05:33 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:33.737286 | orchestrator | 2025-06-02 20:05:33 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:33.737321 | orchestrator | 2025-06-02 20:05:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:36.777121 | orchestrator | 2025-06-02 20:05:36 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:36.777227 | orchestrator | 2025-06-02 20:05:36 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:36.777242 | orchestrator | 2025-06-02 20:05:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:39.818370 | orchestrator | 2025-06-02 20:05:39 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:39.818481 | orchestrator | 2025-06-02 20:05:39 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:39.818529 | orchestrator | 2025-06-02 20:05:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:42.861035 | orchestrator | 2025-06-02 20:05:42 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 
20:05:42.861135 | orchestrator | 2025-06-02 20:05:42 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:42.861150 | orchestrator | 2025-06-02 20:05:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:45.905356 | orchestrator | 2025-06-02 20:05:45 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:45.906660 | orchestrator | 2025-06-02 20:05:45 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:45.906706 | orchestrator | 2025-06-02 20:05:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:48.951470 | orchestrator | 2025-06-02 20:05:48 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:48.953296 | orchestrator | 2025-06-02 20:05:48 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:48.953349 | orchestrator | 2025-06-02 20:05:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:52.003037 | orchestrator | 2025-06-02 20:05:51 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:52.004504 | orchestrator | 2025-06-02 20:05:52 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:52.004543 | orchestrator | 2025-06-02 20:05:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:55.054311 | orchestrator | 2025-06-02 20:05:55 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:55.054621 | orchestrator | 2025-06-02 20:05:55 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:55.054658 | orchestrator | 2025-06-02 20:05:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:58.107082 | orchestrator | 2025-06-02 20:05:58 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:05:58.107726 | orchestrator | 2025-06-02 20:05:58 | INFO  | Task 
620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:05:58.107772 | orchestrator | 2025-06-02 20:05:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:01.150738 | orchestrator | 2025-06-02 20:06:01 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:01.150822 | orchestrator | 2025-06-02 20:06:01 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:01.150833 | orchestrator | 2025-06-02 20:06:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:04.193075 | orchestrator | 2025-06-02 20:06:04 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:04.193816 | orchestrator | 2025-06-02 20:06:04 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:04.193856 | orchestrator | 2025-06-02 20:06:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:07.235525 | orchestrator | 2025-06-02 20:06:07 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:07.236352 | orchestrator | 2025-06-02 20:06:07 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:07.236417 | orchestrator | 2025-06-02 20:06:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:10.261579 | orchestrator | 2025-06-02 20:06:10 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:10.263164 | orchestrator | 2025-06-02 20:06:10 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:10.263213 | orchestrator | 2025-06-02 20:06:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:13.293738 | orchestrator | 2025-06-02 20:06:13 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:13.293837 | orchestrator | 2025-06-02 20:06:13 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 
20:06:13.293850 | orchestrator | 2025-06-02 20:06:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:16.341714 | orchestrator | 2025-06-02 20:06:16 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:16.342740 | orchestrator | 2025-06-02 20:06:16 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:16.342782 | orchestrator | 2025-06-02 20:06:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:19.382350 | orchestrator | 2025-06-02 20:06:19 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:19.384073 | orchestrator | 2025-06-02 20:06:19 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:19.384111 | orchestrator | 2025-06-02 20:06:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:22.423425 | orchestrator | 2025-06-02 20:06:22 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:22.426159 | orchestrator | 2025-06-02 20:06:22 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:22.426247 | orchestrator | 2025-06-02 20:06:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:25.473492 | orchestrator | 2025-06-02 20:06:25 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:25.475771 | orchestrator | 2025-06-02 20:06:25 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:25.475848 | orchestrator | 2025-06-02 20:06:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:28.515422 | orchestrator | 2025-06-02 20:06:28 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:28.516501 | orchestrator | 2025-06-02 20:06:28 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:28.516564 | orchestrator | 2025-06-02 20:06:28 | INFO  | Wait 1 second(s) 
until the next check 2025-06-02 20:06:31.568714 | orchestrator | 2025-06-02 20:06:31 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:31.569750 | orchestrator | 2025-06-02 20:06:31 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:31.569780 | orchestrator | 2025-06-02 20:06:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:34.609771 | orchestrator | 2025-06-02 20:06:34 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:34.610269 | orchestrator | 2025-06-02 20:06:34 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:34.610731 | orchestrator | 2025-06-02 20:06:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:37.664152 | orchestrator | 2025-06-02 20:06:37 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:37.668403 | orchestrator | 2025-06-02 20:06:37 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:37.668484 | orchestrator | 2025-06-02 20:06:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:40.712548 | orchestrator | 2025-06-02 20:06:40 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:40.713718 | orchestrator | 2025-06-02 20:06:40 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:40.713764 | orchestrator | 2025-06-02 20:06:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:43.757635 | orchestrator | 2025-06-02 20:06:43 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:43.760449 | orchestrator | 2025-06-02 20:06:43 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:43.760534 | orchestrator | 2025-06-02 20:06:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:46.797373 | orchestrator | 2025-06-02 
20:06:46 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:46.801264 | orchestrator | 2025-06-02 20:06:46 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:46.801326 | orchestrator | 2025-06-02 20:06:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:49.839387 | orchestrator | 2025-06-02 20:06:49 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:49.840360 | orchestrator | 2025-06-02 20:06:49 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:49.840398 | orchestrator | 2025-06-02 20:06:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:52.880644 | orchestrator | 2025-06-02 20:06:52 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:52.882332 | orchestrator | 2025-06-02 20:06:52 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:52.882396 | orchestrator | 2025-06-02 20:06:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:55.928311 | orchestrator | 2025-06-02 20:06:55 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:55.930174 | orchestrator | 2025-06-02 20:06:55 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:55.930671 | orchestrator | 2025-06-02 20:06:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:58.977865 | orchestrator | 2025-06-02 20:06:58 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state STARTED 2025-06-02 20:06:58.978349 | orchestrator | 2025-06-02 20:06:58 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED 2025-06-02 20:06:58.981641 | orchestrator | 2025-06-02 20:06:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:02.042356 | orchestrator | 2025-06-02 20:07:02 | INFO  | Task 7b17b682-3a5d-4c92-8343-516bd6d79d13 is in state 
SUCCESS 2025-06-02 20:07:02.043331 | orchestrator | 2025-06-02 20:07:02.043377 | orchestrator | None 2025-06-02 20:07:02.043391 | orchestrator | 2025-06-02 20:07:02.043403 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:07:02.043415 | orchestrator | 2025-06-02 20:07:02.043426 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:07:02.043438 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.442) 0:00:00.442 *********** 2025-06-02 20:07:02.043449 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.043462 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.043473 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.043486 | orchestrator | 2025-06-02 20:07:02.043640 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:07:02.043744 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.386) 0:00:00.829 *********** 2025-06-02 20:07:02.043786 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-06-02 20:07:02.043798 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-06-02 20:07:02.043809 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-06-02 20:07:02.043820 | orchestrator | 2025-06-02 20:07:02.043837 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-06-02 20:07:02.043854 | orchestrator | 2025-06-02 20:07:02.043912 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-02 20:07:02.043939 | orchestrator | Monday 02 June 2025 20:00:43 +0000 (0:00:00.676) 0:00:01.506 *********** 2025-06-02 20:07:02.043958 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.043976 | orchestrator | 2025-06-02 
TASK [loadbalancer : Check IPv6 support] ***************************************
Monday 02 June 2025 20:00:44 +0000 (0:00:00.867) 0:00:02.373 ***********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [Setting sysctl values] ***************************************************
Monday 02 June 2025 20:00:45 +0000 (0:00:01.118) 0:00:03.491 ***********
included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2

TASK [sysctl : Check IPv6 support] *********************************************
Monday 02 June 2025 20:00:46 +0000 (0:00:01.141) 0:00:04.632 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [sysctl : Setting sysctl values] ******************************************
Monday 02 June 2025 20:00:47 +0000 (0:00:00.701) 0:00:05.334 ***********
changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})

TASK [module-load : Load modules] **********************************************
Monday 02 June 2025 20:00:51 +0000 (0:00:03.767) 0:00:09.102 ***********
changed: [testbed-node-0] => (item=ip_vs)
changed: [testbed-node-1] => (item=ip_vs)
changed: [testbed-node-2] => (item=ip_vs)

TASK [module-load : Persist modules via modules-load.d] ************************
Monday 02 June 2025 20:00:51 +0000 (0:00:00.844) 0:00:09.946 ***********
changed: [testbed-node-0] => (item=ip_vs)
changed: [testbed-node-2] => (item=ip_vs)
changed: [testbed-node-1] => (item=ip_vs)

TASK [module-load : Drop module persistence] ***********************************
Monday 02 June 2025 20:00:53 +0000 (0:00:01.463) 0:00:11.409 ***********
skipping: [testbed-node-0] => (item=ip_vs)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=ip_vs)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=ip_vs)
skipping: [testbed-node-2]

TASK [loadbalancer : Ensuring config directories exist] ************************
Monday 02 June 2025 20:00:54 +0000 (0:00:01.033) 0:00:12.442 ***********
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})

TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
Monday 02 June 2025 20:00:56 +0000 (0:00:01.867) 0:00:14.309 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
Monday 02 June 2025 20:00:57 +0000 (0:00:00.937) 0:00:15.247 ***********
changed: [testbed-node-0] => (item=users)
changed: [testbed-node-2] => (item=users)
changed: [testbed-node-1] => (item=users)
changed: [testbed-node-0] => (item=rules)
changed: [testbed-node-1] => (item=rules)
changed: [testbed-node-2] => (item=rules)

TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
Monday 02 June 2025 20:00:59 +0000 (0:00:02.084) 0:00:17.331 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
Monday 02 June 2025 20:01:01 +0000 (0:00:01.815) 0:00:19.146 ***********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [loadbalancer : Removing checks for services which are disabled] **********
Monday 02 June 2025 20:01:03 +0000 (0:00:01.854) 0:00:21.001 ***********
skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
skipping: [testbed-node-2]

TASK [loadbalancer : Copying checks for services which are enabled] ************
Monday 02 June 2025 20:01:04 +0000 (0:00:01.362) 0:00:22.363 ***********
changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575', '__omit_place_holder__ce6575d6cfcff944e79ee6758d328a9b1705f575'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})

TASK [loadbalancer : Copying over config.json files for services] **************
Monday 02 June 2025 20:01:08 +0000 (0:00:03.819) 0:00:26.182 ***********
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})

TASK [loadbalancer : Copying over haproxy.cfg] *********************************
Monday 02 June 2025 20:01:11 +0000 (0:00:03.662) 0:00:29.845 ***********
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)

TASK [loadbalancer : Copying over proxysql config] *****************************
Monday 02 June 2025 20:01:14 +0000 (0:00:02.353) 0:00:32.199 ***********
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)

TASK [loadbalancer : Copying over haproxy single external frontend config] *****
Monday 02 June 2025 20:01:18 +0000 (0:00:04.574) 0:00:36.773 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [loadbalancer : Copying over custom haproxy services configuration] *******
Monday 02 June 2025 20:01:19 +0000 (0:00:00.709) 0:00:37.483 ***********
changed: [testbed-node-0] =>
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 20:07:02.048377 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 20:07:02.048388 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 20:07:02.048399 | orchestrator | 2025-06-02 20:07:02.048410 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-02 20:07:02.048421 | orchestrator | Monday 02 June 2025 20:01:22 +0000 (0:00:02.516) 0:00:40.000 *********** 2025-06-02 20:07:02.048431 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 20:07:02.048442 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 20:07:02.048454 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 20:07:02.048464 | orchestrator | 2025-06-02 20:07:02.048475 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-02 20:07:02.048486 | orchestrator | Monday 02 June 2025 20:01:23 +0000 (0:00:01.929) 0:00:41.930 *********** 2025-06-02 20:07:02.048498 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-02 20:07:02.048509 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-02 20:07:02.048520 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-02 20:07:02.048531 | orchestrator | 2025-06-02 20:07:02.048541 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-02 20:07:02.048552 | orchestrator | Monday 02 June 2025 20:01:25 +0000 (0:00:01.699) 0:00:43.629 *********** 
2025-06-02 20:07:02.048563 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-02 20:07:02.048574 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-02 20:07:02.048585 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-02 20:07:02.048596 | orchestrator | 2025-06-02 20:07:02.048606 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-02 20:07:02.048617 | orchestrator | Monday 02 June 2025 20:01:27 +0000 (0:00:01.732) 0:00:45.362 *********** 2025-06-02 20:07:02.048628 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.048639 | orchestrator | 2025-06-02 20:07:02.048650 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-02 20:07:02.048661 | orchestrator | Monday 02 June 2025 20:01:28 +0000 (0:00:00.970) 0:00:46.333 *********** 2025-06-02 20:07:02.048672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 20:07:02.048717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.048755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.048768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.048780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.048791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.048803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.048814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.048832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.048843 | orchestrator |
2025-06-02 20:07:02.048854 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-06-02 20:07:02.048865 | orchestrator | Monday 02 June 2025 20:01:31 +0000 (0:00:03.297) 0:00:49.630 ***********
2025-06-02 20:07:02.048925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.048938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.048949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.048960 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.048972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.048983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049014 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.049030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049084 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.049095 | orchestrator |
2025-06-02 20:07:02.049107 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-06-02 20:07:02.049118 | orchestrator | Monday 02 June 2025 20:01:32 +0000 (0:00:00.822) 0:00:50.452 ***********
2025-06-02 20:07:02.049129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049171 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.049182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049239 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.049250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049291 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.049302 | orchestrator |
2025-06-02 20:07:02.049313 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-06-02 20:07:02.049324 | orchestrator | Monday 02 June 2025 20:01:34 +0000 (0:00:02.128) 0:00:52.581 ***********
2025-06-02 20:07:02.049335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049391 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.049403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049443 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.049455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049499 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.049510 | orchestrator |
2025-06-02 20:07:02.049521 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-06-02 20:07:02.049533 | orchestrator | Monday 02 June 2025 20:01:35 +0000 (0:00:00.720) 0:00:53.302 ***********
2025-06-02 20:07:02.049544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049589 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.049600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049634 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.049658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 20:07:02.049670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:07:02.049682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:07:02.049699 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.049710 | orchestrator |
2025-06-02 20:07:02.049721 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-06-02 20:07:02.049732 | orchestrator | Monday 02 June 2025 20:01:36 +0000 (0:00:00.668) 0:00:53.970 ***********
2025-06-02 20:07:02.049749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.049769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.049784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.049795 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.049829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.049841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.049853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.049914 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.049928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.049939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.049951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.049962 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.049973 | orchestrator | 2025-06-02 20:07:02.049984 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-02 20:07:02.049995 | orchestrator | Monday 02 June 2025 20:01:37 +0000 (0:00:01.643) 0:00:55.614 *********** 2025-06-02 20:07:02.050006 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.050067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.050090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.050102 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.050113 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.050125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.050136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.050148 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.050159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.050180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.050192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.050210 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.050221 | orchestrator | 2025-06-02 20:07:02.050232 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal 
TLS certificate] *** 2025-06-02 20:07:02.050243 | orchestrator | Monday 02 June 2025 20:01:39 +0000 (0:00:01.740) 0:00:57.354 *********** 2025-06-02 20:07:02.050255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.050267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.050278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-06-02 20:07:02.050289 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.050300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.050312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.050335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.050356 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.050368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.050379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.050391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.050402 | orchestrator | skipping: [testbed-node-2] 
2025-06-02 20:07:02.050413 | orchestrator | 2025-06-02 20:07:02.050425 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-02 20:07:02.050436 | orchestrator | Monday 02 June 2025 20:01:40 +0000 (0:00:00.725) 0:00:58.079 *********** 2025-06-02 20:07:02.050447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.050458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.050475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.050502 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.050548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.050571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.050590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.050610 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.050630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:07:02.050643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:07:02.050654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:07:02.050672 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.050684 | orchestrator | 2025-06-02 20:07:02.050695 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-02 20:07:02.050706 | orchestrator | Monday 02 June 2025 20:01:41 +0000 (0:00:01.303) 0:00:59.383 *********** 2025-06-02 20:07:02.050717 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 20:07:02.050734 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 20:07:02.050763 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 20:07:02.050776 | orchestrator | 2025-06-02 20:07:02.050788 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-02 20:07:02.050808 | orchestrator | Monday 02 June 2025 20:01:42 +0000 (0:00:01.511) 0:01:00.894 *********** 2025-06-02 20:07:02.050826 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 20:07:02.050846 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 20:07:02.050863 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 20:07:02.050952 | orchestrator | 2025-06-02 20:07:02.050972 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-02 20:07:02.050989 | orchestrator | Monday 02 June 2025 20:01:44 +0000 (0:00:01.436) 0:01:02.331 *********** 2025-06-02 20:07:02.051000 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:07:02.051011 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:07:02.051022 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:07:02.051033 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.051047 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:07:02.051067 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:07:02.051086 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.051106 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:07:02.051126 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.051139 | orchestrator | 2025-06-02 20:07:02.051150 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-02 20:07:02.051161 | orchestrator | Monday 02 June 2025 20:01:45 +0000 (0:00:01.240) 0:01:03.571 *********** 2025-06-02 20:07:02.051175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 20:07:02.051197 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 20:07:02.051236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 20:07:02.051270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:07:02.051286 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:07:02.051305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:07:02.051334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:07:02.051356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:07:02.051375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:07:02.051407 | orchestrator | 2025-06-02 20:07:02.051424 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-02 20:07:02.051440 | orchestrator | Monday 02 June 2025 20:01:48 +0000 (0:00:02.936) 0:01:06.508 *********** 2025-06-02 20:07:02.051457 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.051472 | orchestrator | 2025-06-02 20:07:02.051487 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-02 20:07:02.051502 | orchestrator | Monday 02 June 2025 20:01:49 +0000 (0:00:00.767) 0:01:07.275 *********** 2025-06-02 20:07:02.051548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 20:07:02.051565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.051583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 20:07:02.051646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.051664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 20:07:02.051711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.051721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051747 | orchestrator | 2025-06-02 20:07:02.051757 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-02 20:07:02.051767 | orchestrator | Monday 02 June 2025 20:01:52 +0000 (0:00:03.586) 
0:01:10.861 *********** 2025-06-02 20:07:02.051777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 20:07:02.051801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.051812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051832 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.051842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 20:07:02.051890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.051911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.051959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 20:07:02.051976 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.051991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.052017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.052035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.052052 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.052069 | orchestrator | 2025-06-02 20:07:02.052086 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-02 20:07:02.052103 | orchestrator | Monday 02 June 2025 20:01:53 +0000 (0:00:00.770) 0:01:11.632 *********** 2025-06-02 20:07:02.052122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 20:07:02.052139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 20:07:02.052154 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.052164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 20:07:02.052174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 20:07:02.052184 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.052194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 
20:07:02.052204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 20:07:02.052214 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.052224 | orchestrator | 2025-06-02 20:07:02.052241 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-02 20:07:02.052251 | orchestrator | Monday 02 June 2025 20:01:54 +0000 (0:00:01.151) 0:01:12.784 *********** 2025-06-02 20:07:02.052261 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.052271 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.052280 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.052290 | orchestrator | 2025-06-02 20:07:02.052299 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-02 20:07:02.052309 | orchestrator | Monday 02 June 2025 20:01:56 +0000 (0:00:01.526) 0:01:14.310 *********** 2025-06-02 20:07:02.052319 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.052329 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.052339 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.052348 | orchestrator | 2025-06-02 20:07:02.052358 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-02 20:07:02.052375 | orchestrator | Monday 02 June 2025 20:01:58 +0000 (0:00:02.051) 0:01:16.362 *********** 2025-06-02 20:07:02.052384 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.052394 | orchestrator | 2025-06-02 20:07:02.052403 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-02 20:07:02.052413 | orchestrator | Monday 02 June 2025 20:01:59 +0000 (0:00:00.635) 0:01:16.998 *********** 
2025-06-02 20:07:02.052424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.052455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.052466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.052481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.052510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.052527 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.052538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.052548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.052558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.052568 | orchestrator |
2025-06-02 20:07:02.052578 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-06-02 20:07:02.052588 | orchestrator | Monday 02 June 2025 20:02:03 +0000 (0:00:04.267) 0:01:21.265 ***********
2025-06-02 20:07:02.052613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 20:07:02.052632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.052643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.052653 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.052664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 20:07:02.052674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 20:07:02.052694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.052713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.052723 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.052733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.052743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.052754 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.052764 | orchestrator |
2025-06-02 20:07:02.052774 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-06-02 20:07:02.052784 | orchestrator | Monday 02 June 2025 20:02:04 +0000 (0:00:01.255) 0:01:22.521 ***********
2025-06-02 20:07:02.052794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:07:02.052805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:07:02.052816 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.052826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:07:02.052836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:07:02.052846 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.052855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:07:02.052865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:07:02.052914 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.052933 | orchestrator |
2025-06-02 20:07:02.052949 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-06-02 20:07:02.052964 | orchestrator | Monday 02 June 2025 20:02:05 +0000 (0:00:01.303) 0:01:23.825 ***********
2025-06-02 20:07:02.052975 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.052984 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.052994 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.053004 | orchestrator |
2025-06-02 20:07:02.053014 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-06-02 20:07:02.053024 | orchestrator | Monday 02 June 2025 20:02:07 +0000 (0:00:01.708) 0:01:25.533 ***********
2025-06-02 20:07:02.053039 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.053049 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.053059 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.053068 | orchestrator |
2025-06-02 20:07:02.053096 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-06-02 20:07:02.053107 | orchestrator | Monday 02 June 2025 20:02:09 +0000 (0:00:00.322) 0:01:27.519 ***********
2025-06-02 20:07:02.053117 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.053126 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.053136 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.053146 | orchestrator |
2025-06-02 20:07:02.053156 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-06-02 20:07:02.053166 | orchestrator | Monday 02 June 2025 20:02:09 +0000 (0:00:00.638) 0:01:27.841 ***********
2025-06-02 20:07:02.053175 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:07:02.053185 | orchestrator |
2025-06-02 20:07:02.053194 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-06-02 20:07:02.053204 | orchestrator | Monday 02 June 2025 20:02:10 +0000 (0:00:00.638) 0:01:28.479 ***********
2025-06-02 20:07:02.053214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:07:02.053226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:07:02.053236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:07:02.053253 | orchestrator |
2025-06-02 20:07:02.053263 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-06-02 20:07:02.053273 | orchestrator | Monday 02 June 2025 20:02:13 +0000 (0:00:03.075) 0:01:31.555 ***********
2025-06-02 20:07:02.053303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:07:02.053314 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.053325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:07:02.053336 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.053346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:07:02.053356 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.053367 | orchestrator |
2025-06-02 20:07:02.053376 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-06-02 20:07:02.053386 | orchestrator | Monday 02 June 2025 20:02:14 +0000 (0:00:01.388) 0:01:32.944 ***********
2025-06-02 20:07:02.053398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:07:02.053415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:07:02.053427 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.053437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:07:02.053447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:07:02.053458 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.053477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:07:02.053488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:07:02.053498 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.053508 | orchestrator |
2025-06-02 20:07:02.053517 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-06-02 20:07:02.053527 | orchestrator | Monday 02 June 2025 20:02:16 +0000 (0:00:01.592) 0:01:34.536 ***********
2025-06-02 20:07:02.053537 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.053547 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.053557 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.053567 | orchestrator |
2025-06-02 20:07:02.053576 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-06-02 20:07:02.053586 | orchestrator | Monday 02 June 2025 20:02:17 +0000 (0:00:00.849) 0:01:35.386 ***********
2025-06-02 20:07:02.053596 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.053606 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.053616 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.053626 | orchestrator |
2025-06-02 20:07:02.053636 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-06-02 20:07:02.053645 | orchestrator | Monday 02 June 2025 20:02:18 +0000 (0:00:01.013) 0:01:36.400 ***********
2025-06-02 20:07:02.053655 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:07:02.053665 | orchestrator |
2025-06-02 20:07:02.053675 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-06-02 20:07:02.053685 | orchestrator | Monday 02 June 2025 20:02:19 +0000 (0:00:00.948) 0:01:37.349 ***********
2025-06-02 20:07:02.053701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 20:07:02.053712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.053723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.053754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.053765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 20:07:02.053783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.053793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.053803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.053832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 20:07:02.053843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.053854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.053919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.053932 | orchestrator |
2025-06-02 20:07:02.053942 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-06-02 20:07:02.053952 | orchestrator | Monday 02 June 2025 20:02:22 +0000 (0:00:03.538) 0:01:40.888 ***********
2025-06-02 20:07:02.053962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 20:07:02.053980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.054008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.054053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.054070 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.054080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 20:07:02.054090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.054101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.054122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.054133 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.054143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 20:07:02.054160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.054170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.054180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True,
'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054190 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.054200 | orchestrator | 2025-06-02 20:07:02.054211 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-02 20:07:02.054220 | orchestrator | Monday 02 June 2025 20:02:24 +0000 (0:00:01.171) 0:01:42.060 *********** 2025-06-02 20:07:02.054231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:07:02.054253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:07:02.054264 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.054275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:07:02.054285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 
20:07:02.054301 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.054311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:07:02.054321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:07:02.054331 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.054341 | orchestrator | 2025-06-02 20:07:02.054350 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-02 20:07:02.054360 | orchestrator | Monday 02 June 2025 20:02:25 +0000 (0:00:00.942) 0:01:43.002 *********** 2025-06-02 20:07:02.054370 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.054379 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.054389 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.054399 | orchestrator | 2025-06-02 20:07:02.054409 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-02 20:07:02.054418 | orchestrator | Monday 02 June 2025 20:02:26 +0000 (0:00:01.144) 0:01:44.146 *********** 2025-06-02 20:07:02.054428 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.054438 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.054448 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.054457 | orchestrator | 2025-06-02 20:07:02.054465 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-02 20:07:02.054473 | orchestrator | Monday 02 June 2025 20:02:28 +0000 (0:00:02.160) 0:01:46.307 *********** 2025-06-02 20:07:02.054481 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
20:07:02.054489 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.054497 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.054505 | orchestrator | 2025-06-02 20:07:02.054513 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-02 20:07:02.054521 | orchestrator | Monday 02 June 2025 20:02:28 +0000 (0:00:00.533) 0:01:46.841 *********** 2025-06-02 20:07:02.054529 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.054537 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.054545 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.054553 | orchestrator | 2025-06-02 20:07:02.054561 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-02 20:07:02.054568 | orchestrator | Monday 02 June 2025 20:02:29 +0000 (0:00:00.331) 0:01:47.172 *********** 2025-06-02 20:07:02.054576 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.054584 | orchestrator | 2025-06-02 20:07:02.054592 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-02 20:07:02.054600 | orchestrator | Monday 02 June 2025 20:02:30 +0000 (0:00:00.933) 0:01:48.105 *********** 2025-06-02 20:07:02.054608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:07:02.054630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:07:02.054639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:07:02.054703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:07:02.054712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:07:02.054721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:07:02.054729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054834 | orchestrator | 2025-06-02 20:07:02.054842 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-02 20:07:02.054850 | orchestrator | Monday 02 June 2025 20:02:34 +0000 (0:00:04.664) 0:01:52.769 *********** 2025-06-02 20:07:02.054887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:07:02.054897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:07:02.054906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-06-02 20:07:02.054914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.054973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:07:02.054982 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.054990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:07:02.054999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055063 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.055071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:07:02.055079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:07:02.055088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.055154 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.055163 | orchestrator | 2025-06-02 20:07:02.055171 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-02 20:07:02.055179 | orchestrator | Monday 02 June 2025 20:02:35 +0000 (0:00:00.823) 0:01:53.592 *********** 2025-06-02 20:07:02.055187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 20:07:02.055196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 20:07:02.055205 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.055214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 20:07:02.055221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 20:07:02.055234 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.055242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 20:07:02.055250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  
2025-06-02 20:07:02.055258 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.055266 | orchestrator | 2025-06-02 20:07:02.055275 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-02 20:07:02.055283 | orchestrator | Monday 02 June 2025 20:02:36 +0000 (0:00:00.971) 0:01:54.564 *********** 2025-06-02 20:07:02.055291 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.055299 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.055306 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.055314 | orchestrator | 2025-06-02 20:07:02.055323 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-02 20:07:02.055331 | orchestrator | Monday 02 June 2025 20:02:38 +0000 (0:00:01.762) 0:01:56.327 *********** 2025-06-02 20:07:02.055339 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.055346 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.055354 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.055362 | orchestrator | 2025-06-02 20:07:02.055371 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-02 20:07:02.055379 | orchestrator | Monday 02 June 2025 20:02:40 +0000 (0:00:01.994) 0:01:58.321 *********** 2025-06-02 20:07:02.055386 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.055394 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.055402 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.055410 | orchestrator | 2025-06-02 20:07:02.055418 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-02 20:07:02.055426 | orchestrator | Monday 02 June 2025 20:02:40 +0000 (0:00:00.312) 0:01:58.634 *********** 2025-06-02 20:07:02.055434 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.055442 | 
orchestrator | 2025-06-02 20:07:02.055450 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-02 20:07:02.055458 | orchestrator | Monday 02 June 2025 20:02:41 +0000 (0:00:00.775) 0:01:59.410 *********** 2025-06-02 20:07:02.055487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-06-02 20:07:02.055504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 20:07:02.055524 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:07:02.055534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 20:07:02.055554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:07:02.055564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 20:07:02.055578 | orchestrator | 2025-06-02 20:07:02.055599 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-02 20:07:02.055607 | orchestrator | Monday 02 June 2025 20:02:46 +0000 (0:00:04.769) 0:02:04.179 *********** 2025-06-02 20:07:02.055624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:07:02.055635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 20:07:02.055648 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.055658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:07:02.055677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 20:07:02.055695 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.055704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:07:02.055722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 20:07:02.055737 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.055745 | orchestrator | 2025-06-02 20:07:02.055753 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-02 20:07:02.055761 | orchestrator | Monday 02 June 2025 20:02:50 +0000 (0:00:04.363) 0:02:08.543 *********** 2025-06-02 20:07:02.055770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:07:02.055779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:07:02.055787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:07:02.055796 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.055804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:07:02.055813 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.055821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:07:02.055839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:07:02.055852 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.055861 | orchestrator | 2025-06-02 20:07:02.055886 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-02 20:07:02.055894 | orchestrator | Monday 02 June 2025 20:02:54 +0000 (0:00:03.636) 0:02:12.179 *********** 2025-06-02 20:07:02.055903 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.055911 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.055919 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.055927 | orchestrator | 2025-06-02 20:07:02.055935 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-02 20:07:02.055943 | orchestrator | Monday 02 June 2025 20:02:55 +0000 (0:00:01.538) 0:02:13.718 *********** 2025-06-02 20:07:02.055951 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.055959 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.055967 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.055975 | orchestrator | 2025-06-02 20:07:02.055983 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-02 20:07:02.055991 | orchestrator | Monday 02 June 2025 20:02:57 +0000 (0:00:01.869) 0:02:15.587 
*********** 2025-06-02 20:07:02.055999 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.056007 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.056015 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.056023 | orchestrator | 2025-06-02 20:07:02.056032 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-02 20:07:02.056040 | orchestrator | Monday 02 June 2025 20:02:57 +0000 (0:00:00.289) 0:02:15.877 *********** 2025-06-02 20:07:02.056048 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.056056 | orchestrator | 2025-06-02 20:07:02.056064 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-02 20:07:02.056072 | orchestrator | Monday 02 June 2025 20:02:58 +0000 (0:00:00.780) 0:02:16.658 *********** 2025-06-02 20:07:02.056080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:07:02.056089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:07:02.056098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:07:02.056112 | orchestrator | 2025-06-02 20:07:02.056120 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-02 20:07:02.056128 | orchestrator | Monday 02 June 2025 20:03:04 +0000 (0:00:05.383) 0:02:22.042 *********** 2025-06-02 20:07:02.056145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 20:07:02.056154 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.056163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 20:07:02.056171 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.056180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 20:07:02.056188 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.056197 | orchestrator | 2025-06-02 20:07:02.056205 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-02 20:07:02.056213 | orchestrator | Monday 02 June 2025 20:03:04 +0000 (0:00:00.745) 0:02:22.787 *********** 2025-06-02 20:07:02.056221 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:07:02.056229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:07:02.056237 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.056245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:07:02.056253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:07:02.056264 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.056278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:07:02.056299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:07:02.056320 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.056335 | orchestrator | 2025-06-02 20:07:02.056348 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-02 20:07:02.056361 | orchestrator | Monday 02 June 2025 20:03:05 +0000 (0:00:00.905) 0:02:23.692 *********** 2025-06-02 20:07:02.056374 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.056387 | orchestrator | changed: 
[testbed-node-2] 2025-06-02 20:07:02.056401 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.056415 | orchestrator | 2025-06-02 20:07:02.056428 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-02 20:07:02.056441 | orchestrator | Monday 02 June 2025 20:03:07 +0000 (0:00:01.678) 0:02:25.371 *********** 2025-06-02 20:07:02.056455 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.056468 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.056481 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.056494 | orchestrator | 2025-06-02 20:07:02.056508 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-02 20:07:02.056528 | orchestrator | Monday 02 June 2025 20:03:09 +0000 (0:00:02.012) 0:02:27.383 *********** 2025-06-02 20:07:02.056543 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.056557 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.056592 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.056609 | orchestrator | 2025-06-02 20:07:02.056623 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-02 20:07:02.056633 | orchestrator | Monday 02 June 2025 20:03:09 +0000 (0:00:00.290) 0:02:27.674 *********** 2025-06-02 20:07:02.056641 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.056649 | orchestrator | 2025-06-02 20:07:02.056657 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-02 20:07:02.056665 | orchestrator | Monday 02 June 2025 20:03:10 +0000 (0:00:00.833) 0:02:28.508 *********** 2025-06-02 20:07:02.056675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:07:02.056703 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:07:02.056714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:07:02.056730 | orchestrator | 2025-06-02 20:07:02.056738 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-02 20:07:02.056746 | orchestrator | Monday 02 June 2025 20:03:16 +0000 (0:00:05.700) 0:02:34.208 *********** 2025-06-02 20:07:02.056775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:07:02.056785 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.056794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:07:02.056809 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.056835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:07:02.056846 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.056854 | orchestrator | 2025-06-02 20:07:02.056862 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-02 20:07:02.056925 | orchestrator | Monday 02 June 2025 20:03:17 +0000 (0:00:00.996) 0:02:35.204 *********** 2025-06-02 20:07:02.056936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 20:07:02.056954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 20:07:02.056963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 20:07:02.056971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 20:07:02.056979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 20:07:02.056987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  
2025-06-02 20:07:02.056994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 20:07:02.057017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 20:07:02.057026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 20:07:02.057033 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.057040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 20:07:02.057047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 20:07:02.057054 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.057062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 20:07:02.057069 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 20:07:02.057081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 20:07:02.057088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 20:07:02.057095 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.057102 | orchestrator |
2025-06-02 20:07:02.057109 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-06-02 20:07:02.057116 | orchestrator | Monday 02 June 2025 20:03:18 +0000 (0:00:01.272) 0:02:36.477 ***********
2025-06-02 20:07:02.057123 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.057130 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.057137 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.057143 | orchestrator |
2025-06-02 20:07:02.057150 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-06-02 20:07:02.057157 | orchestrator | Monday 02 June 2025 20:03:20 +0000 (0:00:01.791) 0:02:38.268 ***********
2025-06-02 20:07:02.057164 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.057171 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.057178 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.057185 | orchestrator |
2025-06-02 20:07:02.057192 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-06-02 20:07:02.057199 | orchestrator | Monday 02 June 2025 20:03:22 +0000 (0:00:02.198) 0:02:40.467 ***********
2025-06-02 20:07:02.057206 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.057213 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.057219 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.057226 | orchestrator |
2025-06-02 20:07:02.057233 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-06-02 20:07:02.057240 | orchestrator | Monday 02 June 2025 20:03:22 +0000 (0:00:00.309) 0:02:40.776 ***********
2025-06-02 20:07:02.057247 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.057254 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.057261 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.057267 | orchestrator |
2025-06-02 20:07:02.057274 | orchestrator | TASK [include_role : keystone] *************************************************
2025-06-02 20:07:02.057281 | orchestrator | Monday 02 June 2025 20:03:23 +0000 (0:00:00.274) 0:02:41.051 ***********
2025-06-02 20:07:02.057288 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:07:02.057295 | orchestrator |
2025-06-02 20:07:02.057302 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-06-02 20:07:02.057309 | orchestrator | Monday 02 June 2025 20:03:24 +0000 (0:00:01.030) 0:02:42.082 ***********
2025-06-02 20:07:02.057331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:07:02.057345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:07:02.057358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:07:02.057371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:07:02.057384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:07:02.057396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:07:02.057431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:07:02.057452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:07:02.057465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:07:02.057477 | orchestrator |
2025-06-02 20:07:02.057488 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-06-02 20:07:02.057500 | orchestrator | Monday 02 June 2025 20:03:27 +0000 (0:00:03.445) 0:02:45.528 ***********
2025-06-02 20:07:02.057513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:07:02.057525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:07:02.057561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:07:02.057582 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.057595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:07:02.057607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:07:02.057619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:07:02.057631 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.057642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:07:02.057682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:07:02.057691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:07:02.057707 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.057715 | orchestrator |
2025-06-02 20:07:02.057722 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-06-02 20:07:02.057729 | orchestrator | Monday 02 June 2025 20:03:28 +0000 (0:00:00.642) 0:02:46.170 ***********
2025-06-02 20:07:02.057736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:07:02.057744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:07:02.057751 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.057758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:07:02.057765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:07:02.057772 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.057779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:07:02.057786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:07:02.057793 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.057800 | orchestrator |
2025-06-02 20:07:02.057806 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-06-02 20:07:02.057813 | orchestrator | Monday 02 June 2025 20:03:29 +0000 (0:00:01.023) 0:02:47.194 ***********
2025-06-02 20:07:02.057820 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.057826 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.057833 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.057840 | orchestrator |
2025-06-02 20:07:02.057851 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-06-02 20:07:02.057858 | orchestrator | Monday 02 June 2025 20:03:30 +0000 (0:00:01.269) 0:02:48.464 ***********
2025-06-02 20:07:02.057865 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.057891 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.057898 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.057905 | orchestrator |
2025-06-02 20:07:02.057912 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-06-02 20:07:02.057919 | orchestrator | Monday 02 June 2025 20:03:32 +0000 (0:00:00.314) 0:02:50.448 ***********
2025-06-02 20:07:02.057925 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.057932 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.057939 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.057945 | orchestrator |
2025-06-02 20:07:02.057952 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-06-02 20:07:02.057959 | orchestrator | Monday 02 June 2025 20:03:32 +0000 (0:00:00.314) 0:02:50.762 ***********
2025-06-02 20:07:02.057965 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:07:02.057972 | orchestrator |
2025-06-02 20:07:02.057979 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-06-02 20:07:02.057989 | orchestrator | Monday 02 June 2025 20:03:34 +0000 (0:00:01.210) 0:02:51.973 ***********
2025-06-02 20:07:02.058009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:07:02.058039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.058047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:07:02.058061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.058076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:07:02.058084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.058091 | orchestrator |
2025-06-02 20:07:02.058098 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-06-02 20:07:02.058105 | orchestrator | Monday 02 June 2025 20:03:37 +0000 (0:00:03.736) 0:02:55.710 ***********
2025-06-02 20:07:02.058112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:07:02.058119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.058133 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.058140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:07:02.058162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.058169 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.058177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:07:02.058184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.058191 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.058198 | orchestrator |
2025-06-02 20:07:02.058204 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-06-02 20:07:02.058211 | orchestrator | Monday 02 June 2025 20:03:38 +0000 (0:00:00.635) 0:02:56.346 ***********
2025-06-02 20:07:02.058222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-02 20:07:02.058230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-02 20:07:02.058237 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.058243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-02 20:07:02.058250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-02 20:07:02.058257 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.058264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-02 20:07:02.058270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-02 20:07:02.058277 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.058283 | orchestrator |
2025-06-02 20:07:02.058290 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-06-02 20:07:02.058297 | orchestrator | Monday 02 June 2025 20:03:39 +0000 (0:00:01.352) 0:02:57.698 ***********
2025-06-02 20:07:02.058304 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.058311 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.058317 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.058324 | orchestrator |
2025-06-02 20:07:02.058331 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-06-02 20:07:02.058337 | orchestrator | Monday 02 June 2025 20:03:40 +0000 (0:00:01.225) 0:02:58.924 ***********
2025-06-02 20:07:02.058344 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.058351 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.058358 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.058364 | orchestrator |
2025-06-02 20:07:02.058371 | orchestrator | TASK [include_role : manila] ***************************************************
2025-06-02 20:07:02.058381 | orchestrator | Monday 02 June 2025 20:03:42 +0000 (0:00:02.013) 0:03:00.938 ***********
2025-06-02 20:07:02.058400 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:07:02.058407 | orchestrator |
2025-06-02 20:07:02.058414 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-06-02 20:07:02.058421 | orchestrator | Monday 02 June 2025 20:03:43 +0000 (0:00:01.007) 0:03:01.946 ***********
2025-06-02 20:07:02.058428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 20:07:02.058435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.058447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.058454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.058461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 20:07:02.058484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 20:07:02.058492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058563 | orchestrator | 2025-06-02 20:07:02.058570 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-02 20:07:02.058585 | orchestrator | Monday 02 June 2025 20:03:47 +0000 (0:00:03.512) 0:03:05.458 *********** 
2025-06-02 20:07:02.058592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 20:07:02.058604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058625 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.058632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 20:07:02.058654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058680 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.058687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 20:07:02.058694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  
2025-06-02 20:07:02.058722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.058729 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.058736 | orchestrator | 2025-06-02 20:07:02.058743 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-02 20:07:02.058755 | orchestrator | Monday 02 June 2025 20:03:48 +0000 (0:00:00.632) 0:03:06.091 *********** 2025-06-02 20:07:02.058762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 20:07:02.058768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 20:07:02.058775 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.058786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 20:07:02.058798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}) 
 2025-06-02 20:07:02.058810 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.058822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 20:07:02.058834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 20:07:02.058845 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.058856 | orchestrator | 2025-06-02 20:07:02.058867 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-02 20:07:02.058895 | orchestrator | Monday 02 June 2025 20:03:48 +0000 (0:00:00.819) 0:03:06.911 *********** 2025-06-02 20:07:02.058906 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.058917 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.058926 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.058936 | orchestrator | 2025-06-02 20:07:02.058947 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-02 20:07:02.058957 | orchestrator | Monday 02 June 2025 20:03:50 +0000 (0:00:01.655) 0:03:08.566 *********** 2025-06-02 20:07:02.058967 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.058980 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.058991 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.059002 | orchestrator | 2025-06-02 20:07:02.059013 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-02 20:07:02.059025 | orchestrator | Monday 02 June 2025 20:03:52 +0000 (0:00:01.967) 0:03:10.534 *********** 2025-06-02 20:07:02.059037 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 
2025-06-02 20:07:02.059048 | orchestrator | 2025-06-02 20:07:02.059060 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-02 20:07:02.059067 | orchestrator | Monday 02 June 2025 20:03:53 +0000 (0:00:01.074) 0:03:11.608 *********** 2025-06-02 20:07:02.059074 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 20:07:02.059081 | orchestrator | 2025-06-02 20:07:02.059088 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-02 20:07:02.059095 | orchestrator | Monday 02 June 2025 20:03:56 +0000 (0:00:02.583) 0:03:14.192 *********** 2025-06-02 20:07:02.059122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:02.059139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 20:07:02.059146 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': 
'1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:02.059161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 20:07:02.059175 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.059207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:02.059216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 20:07:02.059223 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059230 | orchestrator | 2025-06-02 20:07:02.059237 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-02 20:07:02.059243 | orchestrator | Monday 02 June 2025 20:03:59 +0000 (0:00:03.033) 0:03:17.225 *********** 2025-06-02 20:07:02.059254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:02.059277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 20:07:02.059285 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:02.059300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 20:07:02.059311 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:02.059341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 20:07:02.059348 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.059355 | orchestrator | 2025-06-02 20:07:02.059362 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-02 20:07:02.059369 | orchestrator | Monday 02 June 2025 20:04:01 +0000 (0:00:02.500) 0:03:19.726 *********** 2025-06-02 20:07:02.059376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 20:07:02.059384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 20:07:02.059394 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 20:07:02.059409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 20:07:02.059419 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.059437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 
20:07:02.059445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 20:07:02.059452 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059459 | orchestrator | 2025-06-02 20:07:02.059466 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-02 20:07:02.059473 | orchestrator | Monday 02 June 2025 20:04:04 +0000 (0:00:02.708) 0:03:22.434 *********** 2025-06-02 20:07:02.059480 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.059486 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.059493 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.059500 | orchestrator | 2025-06-02 20:07:02.059506 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-02 20:07:02.059513 | orchestrator | Monday 02 June 2025 20:04:06 +0000 (0:00:02.058) 0:03:24.492 *********** 2025-06-02 20:07:02.059520 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059527 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.059534 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059541 | orchestrator | 2025-06-02 20:07:02.059547 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-02 20:07:02.059554 | orchestrator | Monday 02 June 2025 20:04:07 +0000 (0:00:01.414) 0:03:25.907 *********** 2025-06-02 20:07:02.059561 
| orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059567 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.059579 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059586 | orchestrator | 2025-06-02 20:07:02.059593 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-02 20:07:02.059600 | orchestrator | Monday 02 June 2025 20:04:08 +0000 (0:00:00.314) 0:03:26.222 *********** 2025-06-02 20:07:02.059606 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.059613 | orchestrator | 2025-06-02 20:07:02.059620 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-02 20:07:02.059627 | orchestrator | Monday 02 June 2025 20:04:09 +0000 (0:00:01.092) 0:03:27.314 *********** 2025-06-02 20:07:02.059634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 20:07:02.059642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 20:07:02.059664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 20:07:02.059672 | orchestrator | 2025-06-02 20:07:02.059679 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-02 20:07:02.059685 | orchestrator | Monday 02 June 2025 20:04:11 +0000 (0:00:01.818) 0:03:29.133 *********** 2025-06-02 20:07:02.059692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 20:07:02.059700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 20:07:02.059712 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059719 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.059726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}})  2025-06-02 20:07:02.059733 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059740 | orchestrator | 2025-06-02 20:07:02.059747 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-02 20:07:02.059754 | orchestrator | Monday 02 June 2025 20:04:11 +0000 (0:00:00.416) 0:03:29.549 *********** 2025-06-02 20:07:02.059761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-02 20:07:02.059768 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-02 20:07:02.059782 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.059803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-02 20:07:02.059811 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059818 | orchestrator | 2025-06-02 20:07:02.059825 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-02 20:07:02.059831 | orchestrator | Monday 02 June 2025 20:04:12 +0000 (0:00:00.595) 0:03:30.145 *********** 2025-06-02 20:07:02.059838 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059845 | orchestrator | skipping: [testbed-node-1] 
2025-06-02 20:07:02.059852 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059858 | orchestrator | 2025-06-02 20:07:02.059865 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-02 20:07:02.059918 | orchestrator | Monday 02 June 2025 20:04:12 +0000 (0:00:00.744) 0:03:30.889 *********** 2025-06-02 20:07:02.059925 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059932 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.059939 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059945 | orchestrator | 2025-06-02 20:07:02.059952 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-02 20:07:02.059965 | orchestrator | Monday 02 June 2025 20:04:14 +0000 (0:00:01.275) 0:03:32.165 *********** 2025-06-02 20:07:02.059971 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.059979 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.059985 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.059992 | orchestrator | 2025-06-02 20:07:02.059999 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-02 20:07:02.060005 | orchestrator | Monday 02 June 2025 20:04:14 +0000 (0:00:00.310) 0:03:32.475 *********** 2025-06-02 20:07:02.060012 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.060019 | orchestrator | 2025-06-02 20:07:02.060025 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-02 20:07:02.060032 | orchestrator | Monday 02 June 2025 20:04:16 +0000 (0:00:01.490) 0:03:33.965 *********** 2025-06-02 20:07:02.060039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:07:02.060047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.060054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.060078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.060092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 20:07:02.060100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.060107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:07:02.060115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:07:02.060122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:07:02.060151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.060171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  
2025-06-02 20:07:02.060183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:07:02.060195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.060206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.060219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.060252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 20:07:02.060272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:07:02.060285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 20:07:02.060298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.060310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-02 20:07:02.060355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:07:02.060374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.060385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.060407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:07:02.060419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:07:02.060478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-02 20:07:02.060546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-02 20:07:02.060559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.060570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-02 20:07:02.060604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.060640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:07:02.060652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.060665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:07:02.060699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-02 20:07:02.060747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.060760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-02 20:07:02.060783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:07:02.060794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060811 | orchestrator |
2025-06-02 20:07:02.060823 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-06-02 20:07:02.060833 | orchestrator | Monday 02 June 2025 20:04:20 +0000 (0:00:04.679) 0:03:38.645 ***********
2025-06-02 20:07:02.060865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:07:02.060896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-02 20:07:02.060948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.060977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.060990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:07:02.061001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.061013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.061023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.061042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.061071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:07:02.061082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.061094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.061105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-02 20:07:02.061122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-02 20:07:02.061134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.061161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.061173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:07:02.061180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.061187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.061199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.061206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:07:02.061226 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 20:07:02.061233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:07:02.061259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:07:02.061287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061295 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.061301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 20:07:02.061319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 20:07:02.061326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:07:02.061352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:07:02.061359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061366 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:07:02.061372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 20:07:02.061383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:07:02.061409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:07:02.061417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061423 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.061430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 20:07:02.061449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:07:02.061455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}}) 
 2025-06-02 20:07:02.061482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:07:02.061488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.061498 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.061505 | orchestrator | 2025-06-02 20:07:02.061511 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-02 20:07:02.061518 | orchestrator | Monday 02 June 2025 20:04:22 +0000 (0:00:01.636) 0:03:40.281 *********** 2025-06-02 20:07:02.061524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2025-06-02 20:07:02.061530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:07:02.061537 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.061543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:07:02.061549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:07:02.061555 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.061562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:07:02.061568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:07:02.061574 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.061580 | orchestrator | 2025-06-02 20:07:02.061586 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-02 20:07:02.061593 | orchestrator | Monday 02 June 2025 20:04:24 +0000 (0:00:02.277) 0:03:42.558 *********** 2025-06-02 20:07:02.061599 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.061605 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.061611 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.061617 | orchestrator | 2025-06-02 20:07:02.061623 | orchestrator | 
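The long runs of `skipping:` vs. `changed:` records above come from each haproxy-config task iterating a dict of service definitions per host. A minimal sketch of that selection logic (an illustrative stand-in, not kolla-ansible's actual role code — the function name and the exact truthiness handling are assumptions based on the flags visible in the log, where items with `'enabled': False`, `'enabled': 'no'`, or `'host_in_groups': False` are skipped):

```python
def services_to_deploy(services: dict) -> list:
    """Return the service keys that would not be skipped on this host.

    Mirrors the pattern in the log: an item is acted on only when its
    'enabled' flag is truthy AND the current host is in the service's
    group ('host_in_groups': True). Note 'enabled' appears both as a
    bool and as the string 'no' (e.g. neutron-tls-proxy), so string
    values must be normalized before testing.
    """
    deployable = []
    for name, svc in services.items():
        enabled = svc.get("enabled", False)
        if isinstance(enabled, str):
            enabled = enabled.lower() in ("yes", "true", "1")
        if enabled and svc.get("host_in_groups", False):
            deployable.append(name)
    return deployable


# Trimmed-down items matching entries seen in the log above:
example = {
    "neutron-server": {"enabled": True, "host_in_groups": True},
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": True},
    "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": False},
    "neutron-dhcp-agent": {"enabled": False, "host_in_groups": True},
}
print(services_to_deploy(example))  # → ['neutron-server']
```

This also explains why `neutron-ovn-metadata-agent` is skipped in these loop items despite `'enabled': True`: its `'host_in_groups'` is `False` for the node being evaluated.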
TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-02 20:07:02.061630 | orchestrator | Monday 02 June 2025 20:04:26 +0000 (0:00:01.484) 0:03:44.042 *********** 2025-06-02 20:07:02.061636 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.061642 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.061648 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.061654 | orchestrator | 2025-06-02 20:07:02.061660 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-02 20:07:02.061666 | orchestrator | Monday 02 June 2025 20:04:28 +0000 (0:00:02.001) 0:03:46.044 *********** 2025-06-02 20:07:02.061673 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.061679 | orchestrator | 2025-06-02 20:07:02.061685 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-02 20:07:02.061691 | orchestrator | Monday 02 June 2025 20:04:29 +0000 (0:00:01.192) 0:03:47.237 *********** 2025-06-02 20:07:02.061710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.061722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.061729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.061736 | orchestrator | 2025-06-02 20:07:02.061742 | orchestrator | TASK 
[haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-02 20:07:02.061748 | orchestrator | Monday 02 June 2025 20:04:32 +0000 (0:00:03.540) 0:03:50.777 *********** 2025-06-02 20:07:02.061754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.061761 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.061780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.061791 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.061798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.061805 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.061811 | orchestrator | 2025-06-02 20:07:02.061817 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-02 20:07:02.061823 | orchestrator | Monday 02 June 2025 20:04:33 +0000 (0:00:00.535) 0:03:51.313 *********** 2025-06-02 20:07:02.061829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:07:02.061836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}})  2025-06-02 20:07:02.061842 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.061848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:07:02.061855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:07:02.061861 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.061867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:07:02.061889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:07:02.061895 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.061902 | orchestrator | 2025-06-02 20:07:02.061908 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-02 20:07:02.061914 | orchestrator | Monday 02 June 2025 20:04:34 +0000 (0:00:00.777) 0:03:52.091 *********** 2025-06-02 20:07:02.061920 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.061926 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.061933 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.061939 | orchestrator | 2025-06-02 20:07:02.061945 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-02 20:07:02.061951 | orchestrator | Monday 02 June 2025 20:04:35 
+0000 (0:00:01.653) 0:03:53.744 *********** 2025-06-02 20:07:02.061958 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.061968 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.061975 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.061981 | orchestrator | 2025-06-02 20:07:02.061987 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-02 20:07:02.061993 | orchestrator | Monday 02 June 2025 20:04:37 +0000 (0:00:02.067) 0:03:55.812 *********** 2025-06-02 20:07:02.062000 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.062006 | orchestrator | 2025-06-02 20:07:02.062012 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-02 20:07:02.062040 | orchestrator | Monday 02 June 2025 20:04:39 +0000 (0:00:01.319) 0:03:57.131 *********** 2025-06-02 20:07:02.062063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.062072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.062109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.062124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062148 | orchestrator | 2025-06-02 20:07:02.062154 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-02 20:07:02.062161 | orchestrator | Monday 02 June 2025 20:04:43 +0000 (0:00:04.246) 0:04:01.378 *********** 2025-06-02 20:07:02.062247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.062269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062283 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.062290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.062302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062319 | orchestrator | 
skipping: [testbed-node-1] 2025-06-02 20:07:02.062344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.062353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.062366 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.062372 | orchestrator | 2025-06-02 20:07:02.062379 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-02 20:07:02.062390 | orchestrator | Monday 02 June 2025 20:04:44 +0000 (0:00:01.002) 0:04:02.381 *********** 2025-06-02 20:07:02.062397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062423 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 20:07:02.062430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062467 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.062473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:07:02.062498 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.062505 | orchestrator | 2025-06-02 20:07:02.062511 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-02 20:07:02.062517 | orchestrator | Monday 02 June 2025 20:04:45 +0000 (0:00:00.863) 0:04:03.244 *********** 2025-06-02 20:07:02.062523 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.062529 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.062536 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.062542 | orchestrator | 2025-06-02 20:07:02.062548 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-02 20:07:02.062554 | orchestrator | Monday 02 June 2025 20:04:46 +0000 (0:00:01.698) 0:04:04.943 *********** 2025-06-02 20:07:02.062560 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.062566 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.062577 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.062583 | orchestrator | 2025-06-02 20:07:02.062589 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-02 20:07:02.062595 | orchestrator | Monday 02 June 2025 20:04:48 +0000 (0:00:01.967) 0:04:06.910 *********** 2025-06-02 20:07:02.062601 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.062608 | orchestrator | 2025-06-02 20:07:02.062614 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-02 20:07:02.062620 | orchestrator | Monday 02 June 2025 20:04:50 +0000 (0:00:01.404) 0:04:08.315 *********** 2025-06-02 20:07:02.062626 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-02 20:07:02.062633 | orchestrator | 2025-06-02 20:07:02.062639 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-02 20:07:02.062645 | orchestrator | Monday 02 June 2025 20:04:51 +0000 (0:00:00.847) 0:04:09.162 *********** 2025-06-02 20:07:02.062652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 20:07:02.062659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 20:07:02.062669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 20:07:02.062675 | 
orchestrator | 2025-06-02 20:07:02.062693 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-02 20:07:02.062700 | orchestrator | Monday 02 June 2025 20:04:54 +0000 (0:00:03.458) 0:04:12.621 *********** 2025-06-02 20:07:02.062707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:07:02.062714 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.062720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:07:02.062731 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.062738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:07:02.062744 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.062750 | orchestrator | 2025-06-02 20:07:02.062757 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-02 20:07:02.062763 | orchestrator | Monday 02 June 2025 20:04:55 +0000 (0:00:01.151) 0:04:13.772 *********** 2025-06-02 20:07:02.062769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:07:02.062776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:07:02.062783 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.062789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:07:02.062795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:07:02.062802 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.062808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:07:02.062815 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:07:02.062821 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.062827 | orchestrator | 2025-06-02 20:07:02.062834 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 20:07:02.062840 | orchestrator | Monday 02 June 2025 20:04:57 +0000 (0:00:01.627) 0:04:15.400 *********** 2025-06-02 20:07:02.062846 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.062852 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.062858 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.062864 | orchestrator | 2025-06-02 20:07:02.062884 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 20:07:02.062890 | orchestrator | Monday 02 June 2025 20:04:59 +0000 (0:00:02.142) 0:04:17.542 *********** 2025-06-02 20:07:02.062900 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.062906 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.062912 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.062918 | orchestrator | 2025-06-02 20:07:02.062937 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-02 20:07:02.062944 | orchestrator | Monday 02 June 2025 20:05:02 +0000 (0:00:02.835) 0:04:20.378 *********** 2025-06-02 20:07:02.062951 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-02 20:07:02.062961 | orchestrator | 2025-06-02 20:07:02.062968 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-02 20:07:02.062974 | 
orchestrator | Monday 02 June 2025 20:05:03 +0000 (0:00:00.924) 0:04:21.303 *********** 2025-06-02 20:07:02.062981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:07:02.062988 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.062995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:07:02.063001 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.063008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:07:02.063014 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 20:07:02.063021 | orchestrator | 2025-06-02 20:07:02.063027 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-02 20:07:02.063034 | orchestrator | Monday 02 June 2025 20:05:04 +0000 (0:00:01.490) 0:04:22.793 *********** 2025-06-02 20:07:02.063040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:07:02.063047 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.063053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:07:02.063060 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.063071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:07:02.063082 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.063088 | orchestrator | 2025-06-02 20:07:02.063105 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-02 20:07:02.063113 | orchestrator | Monday 02 June 2025 20:05:06 +0000 (0:00:01.663) 0:04:24.457 *********** 2025-06-02 20:07:02.063119 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.063125 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.063132 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.063138 | orchestrator | 2025-06-02 20:07:02.063145 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 20:07:02.063151 | orchestrator | Monday 02 June 2025 20:05:07 +0000 (0:00:01.213) 0:04:25.670 *********** 2025-06-02 20:07:02.063157 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.063164 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.063170 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.063176 | orchestrator | 2025-06-02 20:07:02.063183 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 20:07:02.063189 | orchestrator | Monday 02 June 2025 20:05:10 +0000 (0:00:02.518) 0:04:28.188 *********** 2025-06-02 20:07:02.063196 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.063202 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.063208 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.063215 | orchestrator | 2025-06-02 20:07:02.063221 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-02 20:07:02.063227 | orchestrator | Monday 02 June 2025 20:05:13 +0000 (0:00:03.036) 0:04:31.225 *********** 2025-06-02 
20:07:02.063234 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-02 20:07:02.063240 | orchestrator | 2025-06-02 20:07:02.063246 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-02 20:07:02.063253 | orchestrator | Monday 02 June 2025 20:05:14 +0000 (0:00:01.094) 0:04:32.320 *********** 2025-06-02 20:07:02.063259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:07:02.063266 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.063272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:07:02.063278 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.063285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:07:02.063296 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.063302 | orchestrator | 2025-06-02 20:07:02.063309 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-02 20:07:02.063315 | orchestrator | Monday 02 June 2025 20:05:15 +0000 (0:00:01.008) 0:04:33.329 *********** 2025-06-02 20:07:02.063322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:07:02.063328 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.063351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:07:02.063359 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
20:07:02.063365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:07:02.063372 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.063378 | orchestrator | 2025-06-02 20:07:02.063384 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-02 20:07:02.063391 | orchestrator | Monday 02 June 2025 20:05:16 +0000 (0:00:01.247) 0:04:34.576 *********** 2025-06-02 20:07:02.063397 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.063403 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.063409 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.063416 | orchestrator | 2025-06-02 20:07:02.063422 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 20:07:02.063428 | orchestrator | Monday 02 June 2025 20:05:18 +0000 (0:00:01.684) 0:04:36.261 *********** 2025-06-02 20:07:02.063435 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.063441 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.063447 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.063454 | orchestrator | 2025-06-02 20:07:02.063460 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 20:07:02.063466 | orchestrator | Monday 02 June 2025 20:05:20 +0000 (0:00:02.390) 0:04:38.652 *********** 2025-06-02 20:07:02.063472 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.063479 | orchestrator 
| ok: [testbed-node-1] 2025-06-02 20:07:02.063485 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.063491 | orchestrator | 2025-06-02 20:07:02.063497 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-02 20:07:02.063504 | orchestrator | Monday 02 June 2025 20:05:23 +0000 (0:00:03.180) 0:04:41.832 *********** 2025-06-02 20:07:02.063510 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.063517 | orchestrator | 2025-06-02 20:07:02.063523 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-02 20:07:02.063534 | orchestrator | Monday 02 June 2025 20:05:25 +0000 (0:00:01.336) 0:04:43.169 *********** 2025-06-02 20:07:02.063541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.063548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:07:02.063570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.063577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063584 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:07:02.063602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.063609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.063643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.063650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:07:02.063661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.063681 | orchestrator | 2025-06-02 20:07:02.063687 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-02 20:07:02.063694 | orchestrator | Monday 02 June 2025 20:05:29 +0000 (0:00:03.911) 0:04:47.081 *********** 2025-06-02 20:07:02.063715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.063723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:07:02.063729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.063753 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.063760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.063781 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:07:02.063788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:07:02.063806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:07:02.063812 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.063819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.063825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 
20:07:02.063846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 20:07:02.063853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 20:07:02.063860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:07:02.063909 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.063917 | orchestrator |
2025-06-02 20:07:02.063924 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-06-02 20:07:02.063930 | orchestrator | Monday 02 June 2025 20:05:29 +0000 (0:00:00.723) 0:04:47.805 ***********
2025-06-02 20:07:02.063937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 20:07:02.063943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 20:07:02.063950 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.063956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 20:07:02.063962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 20:07:02.063969 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.063975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 20:07:02.063981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 20:07:02.063988 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.063994 | orchestrator |
2025-06-02 20:07:02.064001 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-06-02 20:07:02.064007 | orchestrator | Monday 02 June 2025 20:05:30 +0000 (0:00:00.906) 0:04:48.711 ***********
2025-06-02 20:07:02.064013 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.064019 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.064025 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.064031 | orchestrator |
2025-06-02 20:07:02.064038 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-06-02 20:07:02.064044 | orchestrator | Monday 02 June 2025 20:05:32 +0000 (0:00:01.804) 0:04:50.515 ***********
2025-06-02 20:07:02.064050 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:07:02.064056 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:07:02.064063 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:07:02.064069 | orchestrator |
2025-06-02 20:07:02.064075 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-06-02 20:07:02.064081 | orchestrator | Monday 02 June 2025 20:05:34 +0000 (0:00:02.087) 0:04:52.603 ***********
2025-06-02 20:07:02.064087 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:07:02.064094 | orchestrator |
2025-06-02 20:07:02.064100 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-06-02 20:07:02.064106 | orchestrator | Monday 02 June 2025 20:05:36 +0000 (0:00:01.397) 0:04:54.001 ***********
2025-06-02 20:07:02.064128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 20:07:02.064140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 20:07:02.064146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 20:07:02.064154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 20:07:02.064174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 20:07:02.064188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 20:07:02.064194 | orchestrator |
2025-06-02 20:07:02.064199 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-06-02 20:07:02.064205 | orchestrator | Monday 02 June 2025 20:05:41 +0000 (0:00:05.399) 0:04:59.400 ***********
2025-06-02 20:07:02.064210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 20:07:02.064216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 20:07:02.064222 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.064241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 20:07:02.064252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 20:07:02.064258 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.064264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 20:07:02.064270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 20:07:02.064275 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.064281 | orchestrator |
2025-06-02 20:07:02.064287 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-06-02 20:07:02.064292 | orchestrator | Monday 02 June 2025 20:05:42 +0000 (0:00:00.988) 0:05:00.388 ***********
2025-06-02 20:07:02.064298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-06-02 20:07:02.064307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-06-02 20:07:02.064331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-06-02 20:07:02.064338 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.064343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-06-02 20:07:02.064349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-06-02 20:07:02.064355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-06-02 20:07:02.064360 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.064366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-06-02 20:07:02.064371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-06-02 20:07:02.064377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-06-02 20:07:02.064383 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.064389 | orchestrator |
2025-06-02 20:07:02.064394 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-06-02 20:07:02.064400 | orchestrator | Monday 02 June 2025 20:05:43 +0000 (0:00:00.866) 0:05:01.254 ***********
2025-06-02 20:07:02.064405 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.064411 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.064416 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.064422 | orchestrator |
2025-06-02 20:07:02.064427 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-06-02 20:07:02.064433 | orchestrator | Monday 02 June 2025 20:05:43 +0000 (0:00:00.404) 0:05:01.659 ***********
2025-06-02 20:07:02.064438 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:07:02.064444 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:07:02.064449 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:07:02.064455 | orchestrator |
2025-06-02 20:07:02.064460 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-06-02 20:07:02.064466 | orchestrator | Monday 02 June 2025 20:05:45 +0000 (0:00:01.402) 0:05:03.061 ***********
2025-06-02 20:07:02.064471 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:07:02.064477 | orchestrator |
2025-06-02 20:07:02.064482 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-06-02 20:07:02.064488 | orchestrator | Monday 02 June 2025 20:05:46 +0000 (0:00:01.693) 0:05:04.754 ***********
2025-06-02 20:07:02.064494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 20:07:02.064503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:07:02.064522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 20:07:02.064541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 20:07:02.064547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:07:02.064556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 20:07:02.064562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:07:02.064587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 20:07:02.064599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 20:07:02.064620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 20:07:02.064632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 20:07:02.064639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 20:07:02.064660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 20:07:02.064666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 20:07:02.064678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 20:07:02.064685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 20:07:02.064691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:07:02.064724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 20:07:02.064731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2',
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:07:02.064736 | orchestrator | 2025-06-02 20:07:02.064742 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-02 20:07:02.064748 | orchestrator | Monday 02 June 2025 20:05:50 +0000 (0:00:04.132) 0:05:08.887 *********** 2025-06-02 20:07:02.064753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:07:02.064759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:07:02.064770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:07:02.064794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:07:02.064801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 20:07:02.064807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-06-02 20:07:02.064816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:07:02.064827 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.064833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:07:02.064844 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:07:02.064851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:07:02.064885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:07:02.064892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 20:07:02.064898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:07:02.064921 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.064927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:07:02.064938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:07:02.064944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:07:02.064968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:07:02.064975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 20:07:02.064984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:07:02.064996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:07:02.065001 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065007 | orchestrator | 2025-06-02 20:07:02.065013 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-02 20:07:02.065019 | orchestrator | Monday 02 June 2025 20:05:52 +0000 (0:00:01.217) 0:05:10.105 *********** 2025-06-02 20:07:02.065024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 20:07:02.065030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 20:07:02.065035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:07:02.065047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:07:02.065054 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065060 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 20:07:02.065066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 20:07:02.065075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:07:02.065081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:07:02.065086 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 20:07:02.065097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 20:07:02.065103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2025-06-02 20:07:02.065112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:07:02.065118 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065123 | orchestrator | 2025-06-02 20:07:02.065129 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-02 20:07:02.065134 | orchestrator | Monday 02 June 2025 20:05:53 +0000 (0:00:00.968) 0:05:11.074 *********** 2025-06-02 20:07:02.065140 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065146 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065151 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065157 | orchestrator | 2025-06-02 20:07:02.065162 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-02 20:07:02.065168 | orchestrator | Monday 02 June 2025 20:05:53 +0000 (0:00:00.436) 0:05:11.510 *********** 2025-06-02 20:07:02.065173 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065179 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065184 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065190 | orchestrator | 2025-06-02 20:07:02.065195 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-02 20:07:02.065201 | orchestrator | Monday 02 June 2025 20:05:54 +0000 (0:00:01.385) 0:05:12.896 *********** 2025-06-02 20:07:02.065206 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.065211 | orchestrator | 2025-06-02 20:07:02.065217 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] 
******************* 2025-06-02 20:07:02.065222 | orchestrator | Monday 02 June 2025 20:05:56 +0000 (0:00:01.724) 0:05:14.621 *********** 2025-06-02 20:07:02.065234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:07:02.065245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:07:02.065251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:07:02.065257 | orchestrator | 2025-06-02 20:07:02.065263 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-02 20:07:02.065268 | orchestrator | Monday 02 June 2025 20:05:59 +0000 (0:00:02.374) 0:05:16.995 *********** 2025-06-02 20:07:02.065274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 20:07:02.065286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 20:07:02.065297 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065302 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 20:07:02.065314 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065319 | orchestrator | 2025-06-02 20:07:02.065325 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-02 20:07:02.065330 | orchestrator | Monday 02 June 2025 20:05:59 +0000 (0:00:00.413) 0:05:17.409 *********** 2025-06-02 20:07:02.065336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 20:07:02.065342 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 20:07:02.065353 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 20:07:02.065364 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065369 | orchestrator | 2025-06-02 20:07:02.065375 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-02 
20:07:02.065380 | orchestrator | Monday 02 June 2025 20:06:00 +0000 (0:00:01.045) 0:05:18.454 *********** 2025-06-02 20:07:02.065386 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065391 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065397 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065402 | orchestrator | 2025-06-02 20:07:02.065408 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-02 20:07:02.065414 | orchestrator | Monday 02 June 2025 20:06:01 +0000 (0:00:00.498) 0:05:18.952 *********** 2025-06-02 20:07:02.065419 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065425 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065430 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065436 | orchestrator | 2025-06-02 20:07:02.065441 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-02 20:07:02.065447 | orchestrator | Monday 02 June 2025 20:06:02 +0000 (0:00:01.291) 0:05:20.244 *********** 2025-06-02 20:07:02.065457 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:02.065463 | orchestrator | 2025-06-02 20:07:02.065468 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-02 20:07:02.065473 | orchestrator | Monday 02 June 2025 20:06:04 +0000 (0:00:01.860) 0:05:22.105 *********** 2025-06-02 20:07:02.065479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.065491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.065498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.065504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.065515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.065527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 20:07:02.065533 | orchestrator | 2025-06-02 20:07:02.065538 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-02 20:07:02.065544 | orchestrator | Monday 02 June 2025 20:06:10 +0000 (0:00:06.507) 0:05:28.613 *********** 2025-06-02 20:07:02.065550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.065556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.065565 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.065593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.065600 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.065612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 20:07:02.065618 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065627 | orchestrator | 2025-06-02 20:07:02.065632 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-02 20:07:02.065638 | orchestrator | Monday 02 June 2025 20:06:11 +0000 (0:00:00.665) 0:05:29.278 *********** 2025-06-02 20:07:02.065643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065666 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}})  2025-06-02 20:07:02.065697 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:07:02.065728 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065734 | orchestrator | 2025-06-02 20:07:02.065739 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-02 20:07:02.065745 | orchestrator | Monday 02 June 2025 20:06:13 +0000 (0:00:01.733) 0:05:31.012 *********** 2025-06-02 20:07:02.065750 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.065756 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.065761 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.065767 | orchestrator | 2025-06-02 20:07:02.065772 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-02 20:07:02.065781 | orchestrator | Monday 02 June 2025 20:06:14 +0000 (0:00:01.352) 0:05:32.365 *********** 2025-06-02 20:07:02.065787 | 
orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.065792 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.065798 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.065803 | orchestrator | 2025-06-02 20:07:02.065809 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-02 20:07:02.065814 | orchestrator | Monday 02 June 2025 20:06:16 +0000 (0:00:02.221) 0:05:34.587 *********** 2025-06-02 20:07:02.065819 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065825 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065830 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065836 | orchestrator | 2025-06-02 20:07:02.065841 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-02 20:07:02.065847 | orchestrator | Monday 02 June 2025 20:06:16 +0000 (0:00:00.319) 0:05:34.906 *********** 2025-06-02 20:07:02.065852 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065857 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065863 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065899 | orchestrator | 2025-06-02 20:07:02.065906 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-02 20:07:02.065912 | orchestrator | Monday 02 June 2025 20:06:17 +0000 (0:00:00.607) 0:05:35.513 *********** 2025-06-02 20:07:02.065917 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065923 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065928 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065934 | orchestrator | 2025-06-02 20:07:02.065939 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-02 20:07:02.065944 | orchestrator | Monday 02 June 2025 20:06:17 +0000 (0:00:00.315) 0:05:35.829 *********** 2025-06-02 20:07:02.065950 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065955 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065961 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065966 | orchestrator | 2025-06-02 20:07:02.065972 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-02 20:07:02.065977 | orchestrator | Monday 02 June 2025 20:06:18 +0000 (0:00:00.322) 0:05:36.151 *********** 2025-06-02 20:07:02.065982 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.065988 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.065993 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.065999 | orchestrator | 2025-06-02 20:07:02.066004 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-02 20:07:02.066009 | orchestrator | Monday 02 June 2025 20:06:18 +0000 (0:00:00.298) 0:05:36.449 *********** 2025-06-02 20:07:02.066035 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.066041 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.066046 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.066054 | orchestrator | 2025-06-02 20:07:02.066060 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-02 20:07:02.066065 | orchestrator | Monday 02 June 2025 20:06:19 +0000 (0:00:00.836) 0:05:37.286 *********** 2025-06-02 20:07:02.066071 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.066076 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.066082 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.066087 | orchestrator | 2025-06-02 20:07:02.066093 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-02 20:07:02.066098 | orchestrator | Monday 02 June 2025 20:06:20 +0000 (0:00:00.710) 0:05:37.996 *********** 2025-06-02 20:07:02.066104 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 20:07:02.066109 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.066115 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.066120 | orchestrator | 2025-06-02 20:07:02.066126 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-02 20:07:02.066132 | orchestrator | Monday 02 June 2025 20:06:20 +0000 (0:00:00.345) 0:05:38.342 *********** 2025-06-02 20:07:02.066141 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.066147 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.066152 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.066158 | orchestrator | 2025-06-02 20:07:02.066163 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-02 20:07:02.066172 | orchestrator | Monday 02 June 2025 20:06:21 +0000 (0:00:01.186) 0:05:39.528 *********** 2025-06-02 20:07:02.066178 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.066184 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.066193 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.066198 | orchestrator | 2025-06-02 20:07:02.066203 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-02 20:07:02.066208 | orchestrator | Monday 02 June 2025 20:06:22 +0000 (0:00:00.921) 0:05:40.450 *********** 2025-06-02 20:07:02.066213 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.066218 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.066223 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.066228 | orchestrator | 2025-06-02 20:07:02.066233 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-02 20:07:02.066238 | orchestrator | Monday 02 June 2025 20:06:23 +0000 (0:00:00.895) 0:05:41.345 *********** 2025-06-02 20:07:02.066243 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.066248 | orchestrator | changed: 
[testbed-node-1] 2025-06-02 20:07:02.066253 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.066261 | orchestrator | 2025-06-02 20:07:02.066269 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-02 20:07:02.066278 | orchestrator | Monday 02 June 2025 20:06:32 +0000 (0:00:09.601) 0:05:50.946 *********** 2025-06-02 20:07:02.066290 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.066302 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.066310 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.066318 | orchestrator | 2025-06-02 20:07:02.066326 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-02 20:07:02.066333 | orchestrator | Monday 02 June 2025 20:06:34 +0000 (0:00:01.467) 0:05:52.414 *********** 2025-06-02 20:07:02.066340 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.066348 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.066356 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.066365 | orchestrator | 2025-06-02 20:07:02.066372 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-02 20:07:02.066377 | orchestrator | Monday 02 June 2025 20:06:42 +0000 (0:00:08.401) 0:06:00.815 *********** 2025-06-02 20:07:02.066382 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.066387 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.066392 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.066397 | orchestrator | 2025-06-02 20:07:02.066402 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-02 20:07:02.066407 | orchestrator | Monday 02 June 2025 20:06:46 +0000 (0:00:03.885) 0:06:04.701 *********** 2025-06-02 20:07:02.066412 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:02.066416 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:02.066422 | 
orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:02.066426 | orchestrator | 2025-06-02 20:07:02.066432 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-02 20:07:02.066437 | orchestrator | Monday 02 June 2025 20:06:56 +0000 (0:00:09.485) 0:06:14.186 *********** 2025-06-02 20:07:02.066442 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.066446 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.066451 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.066456 | orchestrator | 2025-06-02 20:07:02.066461 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-02 20:07:02.066466 | orchestrator | Monday 02 June 2025 20:06:56 +0000 (0:00:00.678) 0:06:14.865 *********** 2025-06-02 20:07:02.066471 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.066476 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.066486 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.066491 | orchestrator | 2025-06-02 20:07:02.066496 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-02 20:07:02.066500 | orchestrator | Monday 02 June 2025 20:06:57 +0000 (0:00:00.343) 0:06:15.208 *********** 2025-06-02 20:07:02.066505 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.066510 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.066515 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.066520 | orchestrator | 2025-06-02 20:07:02.066525 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-02 20:07:02.066530 | orchestrator | Monday 02 June 2025 20:06:57 +0000 (0:00:00.343) 0:06:15.552 *********** 2025-06-02 20:07:02.066535 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.066540 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.066545 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.066549 | orchestrator | 2025-06-02 20:07:02.066554 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-02 20:07:02.066559 | orchestrator | Monday 02 June 2025 20:06:57 +0000 (0:00:00.324) 0:06:15.877 *********** 2025-06-02 20:07:02.066564 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.066569 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.066574 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.066579 | orchestrator | 2025-06-02 20:07:02.066584 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-02 20:07:02.066589 | orchestrator | Monday 02 June 2025 20:06:58 +0000 (0:00:00.341) 0:06:16.218 *********** 2025-06-02 20:07:02.066594 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:02.066599 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:02.066604 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:02.066609 | orchestrator | 2025-06-02 20:07:02.066614 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-02 20:07:02.066619 | orchestrator | Monday 02 June 2025 20:06:58 +0000 (0:00:00.674) 0:06:16.893 *********** 2025-06-02 20:07:02.066624 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.066629 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.066634 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:02.066639 | orchestrator | 2025-06-02 20:07:02.066644 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-02 20:07:02.066649 | orchestrator | Monday 02 June 2025 20:06:59 +0000 (0:00:00.876) 0:06:17.770 *********** 2025-06-02 20:07:02.066654 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:02.066659 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:02.066664 | orchestrator | ok: 
[testbed-node-2]
2025-06-02 20:07:02.066669 | orchestrator |
2025-06-02 20:07:02.066674 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:07:02.066682 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-02 20:07:02.066692 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-02 20:07:02.066697 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-02 20:07:02.066702 | orchestrator |
2025-06-02 20:07:02.066707 | orchestrator |
2025-06-02 20:07:02.066712 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:07:02.066717 | orchestrator | Monday 02 June 2025 20:07:00 +0000 (0:00:00.807) 0:06:18.577 ***********
2025-06-02 20:07:02.066724 | orchestrator | ===============================================================================
2025-06-02 20:07:02.066732 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.60s
2025-06-02 20:07:02.066744 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.49s
2025-06-02 20:07:02.066758 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.40s
2025-06-02 20:07:02.066765 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.51s
2025-06-02 20:07:02.066773 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.70s
2025-06-02 20:07:02.066780 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.40s
2025-06-02 20:07:02.066788 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 5.38s
2025-06-02 20:07:02.066794 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.77s
2025-06-02 20:07:02.066802 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.68s
2025-06-02 20:07:02.066810 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.66s
2025-06-02 20:07:02.066817 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.57s
2025-06-02 20:07:02.066825 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.36s
2025-06-02 20:07:02.066833 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.27s
2025-06-02 20:07:02.066840 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.25s
2025-06-02 20:07:02.066848 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.13s
2025-06-02 20:07:02.066856 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.91s
2025-06-02 20:07:02.066864 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.89s
2025-06-02 20:07:02.066889 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.82s
2025-06-02 20:07:02.066894 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.77s
2025-06-02 20:07:02.066899 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.74s
2025-06-02 20:07:02.066904 | orchestrator | 2025-06-02 20:07:02 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED
2025-06-02 20:07:02.066909 | orchestrator | 2025-06-02 20:07:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:07:05.101174 | orchestrator | 2025-06-02 20:07:05 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED
2025-06-02 20:07:05.101298 | orchestrator | 2025-06-02 20:07:05 | INFO
| Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED
2025-06-02 20:07:05.101980 | orchestrator | 2025-06-02 20:07:05 | INFO  | Task 1f1f1cfd-78bc-410c-9813-f3188360aad0 is in state STARTED
2025-06-02 20:07:05.102008 | orchestrator | 2025-06-02 20:07:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:08:39.607171 | orchestrator |
2025-06-02 20:08:39 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED
2025-06-02 20:08:39.610255 | orchestrator | 2025-06-02 20:08:39 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED
2025-06-02 20:08:39.612415 | orchestrator | 2025-06-02 20:08:39 | INFO  | Task 1f1f1cfd-78bc-410c-9813-f3188360aad0 is in state STARTED
2025-06-02 20:08:39.612472 | orchestrator | 2025-06-02 20:08:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:08:42.664367 | orchestrator | 2025-06-02 20:08:42 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED
2025-06-02 20:08:42.666704 | orchestrator | 2025-06-02 20:08:42 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state STARTED
2025-06-02 20:08:42.668410 | orchestrator | 2025-06-02 20:08:42 | INFO  | Task 1f1f1cfd-78bc-410c-9813-f3188360aad0 is in state STARTED
2025-06-02 20:08:42.668470 | orchestrator | 2025-06-02 20:08:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:08:45.722194 | orchestrator | 2025-06-02 20:08:45 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED
2025-06-02 20:08:45.724006 | orchestrator | 2025-06-02 20:08:45 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:08:45.730705 | orchestrator | 2025-06-02 20:08:45 | INFO  | Task 620b0787-61ce-4ed5-8583-8ae3717560ee is in state SUCCESS
2025-06-02 20:08:45.733403 | orchestrator |
2025-06-02 20:08:45.733471 | orchestrator |
2025-06-02 20:08:45.733479 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-02 20:08:45.733488 | orchestrator |
2025-06-02 20:08:45.733494 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-02 20:08:45.733501 | orchestrator | Monday 02 June 2025 19:58:12 +0000 (0:00:00.688) 0:00:00.688 ***********
2025-06-02 20:08:45.733509 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.733517 | orchestrator |
2025-06-02 20:08:45.733524 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-02 20:08:45.733531 | orchestrator | Monday 02 June 2025 19:58:13 +0000 (0:00:01.201) 0:00:01.889 ***********
2025-06-02 20:08:45.733538 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.733546 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.733552 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.733559 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.733565 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.733572 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.733576 | orchestrator |
2025-06-02 20:08:45.733580 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-02 20:08:45.733584 | orchestrator | Monday 02 June 2025 19:58:15 +0000 (0:00:00.921) 0:00:03.588 ***********
2025-06-02 20:08:45.733588 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.733592 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.733595 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.733600 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.733603 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.733608 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.733614 | orchestrator |
2025-06-02 20:08:45.733620 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-02 20:08:45.733626 | orchestrator | Monday 02 June 2025 19:58:16 +0000 (0:00:01.127) 0:00:04.509 ***********
2025-06-02 20:08:45.733633 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.733640 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.733646 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.733652 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.733659 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.733665 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.733671 | orchestrator |
2025-06-02 20:08:45.733677 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-02 20:08:45.733684 | orchestrator | Monday 02 June 2025 19:58:17 +0000 (0:00:01.127) 0:00:05.637 ***********
2025-06-02 20:08:45.733691 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.733697 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.733703 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.733710 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.733716 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.733744 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.733851 | orchestrator |
2025-06-02 20:08:45.733856 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-02 20:08:45.733860 | orchestrator | Monday 02 June 2025 19:58:18 +0000 (0:00:00.783) 0:00:06.421 ***********
2025-06-02 20:08:45.733864 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.733867 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.733871 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.733875 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.733879 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.733882 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.733886 | orchestrator |
2025-06-02 20:08:45.733890 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-02 20:08:45.733894 | orchestrator | Monday 02 June 2025 19:58:19 +0000 (0:00:00.976) 0:00:07.397 ***********
2025-06-02 20:08:45.733898 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.733902 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.733906 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.733912 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.733919 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.733926 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.733932 | orchestrator |
2025-06-02 20:08:45.733951 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-02 20:08:45.733958 | orchestrator | Monday 02 June 2025 19:58:19 +0000 (0:00:00.880) 0:00:08.278 ***********
2025-06-02 20:08:45.733964 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.733971 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.733977 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.733983 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.733990 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.733995 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.734002 | orchestrator |
2025-06-02 20:08:45.734008 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-02 20:08:45.734054 | orchestrator | Monday 02 June 2025 19:58:20 +0000 (0:00:00.722) 0:00:09.000 ***********
2025-06-02 20:08:45.734064 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.734072 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.734079 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.734085 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.734091 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.734098 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.734105 | orchestrator |
2025-06-02 20:08:45.734111 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-02 20:08:45.734118 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:00.986) 0:00:09.987 ***********
2025-06-02 20:08:45.734168 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.734176 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:08:45.734183 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:08:45.734190 | orchestrator |
2025-06-02 20:08:45.734197 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-02 20:08:45.734204 | orchestrator | Monday 02 June 2025 19:58:22 +0000 (0:00:00.640) 0:00:10.627 ***********
2025-06-02 20:08:45.734211 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.734217 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.734224 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.734232 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.734238 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.734245 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.734252 | orchestrator |
2025-06-02 20:08:45.734277 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-02 20:08:45.734284 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:03.138) 0:00:11.948 ***********
2025-06-02 20:08:45.734291 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.734307 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:08:45.734314 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:08:45.734321 | orchestrator |
2025-06-02 20:08:45.734328 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-02 20:08:45.734335 | orchestrator | Monday 02 June 2025 19:58:26 +0000 (0:00:03.138) 0:00:15.087 ***********
2025-06-02 20:08:45.734342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.734350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:08:45.734358 | orchestrator |
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:08:45.734365 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.734372 | orchestrator |
2025-06-02 20:08:45.734379 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-02 20:08:45.734386 | orchestrator | Monday 02 June 2025 19:58:27 +0000 (0:00:01.216) 0:00:16.303 ***********
2025-06-02 20:08:45.734394 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.734404 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.734411 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.734418 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.734425 | orchestrator |
2025-06-02 20:08:45.734432 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-02 20:08:45.734439 | orchestrator | Monday 02 June 2025 19:58:28 +0000 (0:00:00.973) 0:00:17.277 ***********
2025-06-02 20:08:45.734447 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.734462 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.734469 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.734476 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.734483 | orchestrator |
2025-06-02 20:08:45.734542 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-02 20:08:45.734551 | orchestrator | Monday 02 June 2025 19:58:29 +0000 (0:00:00.449) 0:00:17.726 ***********
2025-06-02 20:08:45.734559 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 19:58:24.279238', 'end': '2025-06-02 19:58:24.533457', 'delta': '0:00:00.254219', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.734582 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 19:58:25.361605', 'end': '2025-06-02 19:58:25.640903', 'delta': '0:00:00.279298', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.734590 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 19:58:26.263973', 'end': '2025-06-02 19:58:26.535463', 'delta': '0:00:00.271490', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.734597 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.734603 | orchestrator |
2025-06-02 20:08:45.734610 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-02 20:08:45.734616 | orchestrator | Monday 02 June 2025 19:58:29 +0000 (0:00:00.243)
0:00:17.969 *********** 2025-06-02 20:08:45.734623 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.734629 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.734636 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.734643 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.734649 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.734656 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.734662 | orchestrator | 2025-06-02 20:08:45.734669 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-02 20:08:45.734675 | orchestrator | Monday 02 June 2025 19:58:32 +0000 (0:00:02.426) 0:00:20.396 *********** 2025-06-02 20:08:45.734682 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.734688 | orchestrator | 2025-06-02 20:08:45.734694 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-02 20:08:45.734700 | orchestrator | Monday 02 June 2025 19:58:32 +0000 (0:00:00.878) 0:00:21.274 *********** 2025-06-02 20:08:45.734707 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.734713 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.734719 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.734726 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.734732 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.734738 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.734745 | orchestrator | 2025-06-02 20:08:45.734793 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-02 20:08:45.734800 | orchestrator | Monday 02 June 2025 19:58:33 +0000 (0:00:00.933) 0:00:22.208 *********** 2025-06-02 20:08:45.734806 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.734818 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.734831 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
20:08:45.734837 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.734844 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.734850 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.734856 | orchestrator | 2025-06-02 20:08:45.734863 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 20:08:45.734870 | orchestrator | Monday 02 June 2025 19:58:35 +0000 (0:00:01.683) 0:00:23.892 *********** 2025-06-02 20:08:45.734876 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.734943 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.734951 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.734958 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.734965 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.734971 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.734977 | orchestrator | 2025-06-02 20:08:45.734983 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-02 20:08:45.734989 | orchestrator | Monday 02 June 2025 19:58:36 +0000 (0:00:01.443) 0:00:25.335 *********** 2025-06-02 20:08:45.734995 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.735001 | orchestrator | 2025-06-02 20:08:45.735007 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-02 20:08:45.735013 | orchestrator | Monday 02 June 2025 19:58:37 +0000 (0:00:00.160) 0:00:25.496 *********** 2025-06-02 20:08:45.735019 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.735042 | orchestrator | 2025-06-02 20:08:45.735048 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 20:08:45.735055 | orchestrator | Monday 02 June 2025 19:58:37 +0000 (0:00:00.254) 0:00:25.751 *********** 2025-06-02 20:08:45.735062 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
20:08:45.735068 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.735075 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.735082 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.735088 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.735117 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.735124 | orchestrator | 2025-06-02 20:08:45.735131 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-02 20:08:45.735144 | orchestrator | Monday 02 June 2025 19:58:38 +0000 (0:00:00.783) 0:00:26.534 *********** 2025-06-02 20:08:45.735151 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.735157 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.735164 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.735170 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.735176 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.735182 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.735188 | orchestrator | 2025-06-02 20:08:45.735195 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-02 20:08:45.735201 | orchestrator | Monday 02 June 2025 19:58:39 +0000 (0:00:01.053) 0:00:27.588 *********** 2025-06-02 20:08:45.735208 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.735214 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.735221 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.735228 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.735234 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.735241 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.735247 | orchestrator | 2025-06-02 20:08:45.735254 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-02 20:08:45.735261 | orchestrator | Monday 02 June 
2025 19:58:39 +0000 (0:00:00.595) 0:00:28.183 *********** 2025-06-02 20:08:45.735268 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.735274 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.735280 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.735286 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.735293 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.735306 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.735313 | orchestrator | 2025-06-02 20:08:45.735319 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-02 20:08:45.735326 | orchestrator | Monday 02 June 2025 19:58:40 +0000 (0:00:00.963) 0:00:29.146 *********** 2025-06-02 20:08:45.735332 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.735339 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.735346 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.735352 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.735359 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.735366 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.735372 | orchestrator | 2025-06-02 20:08:45.735378 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-02 20:08:45.735385 | orchestrator | Monday 02 June 2025 19:58:41 +0000 (0:00:00.810) 0:00:29.957 *********** 2025-06-02 20:08:45.735392 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.735398 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.735404 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.735410 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.735416 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.735422 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.735428 | orchestrator | 2025-06-02 20:08:45.735434 | orchestrator | 
TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-02 20:08:45.735440 | orchestrator | Monday 02 June 2025 19:58:42 +0000 (0:00:01.018) 0:00:30.975 *********** 2025-06-02 20:08:45.735446 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.735453 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.735459 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.735466 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.735473 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.735480 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.735513 | orchestrator | 2025-06-02 20:08:45.735520 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-02 20:08:45.735526 | orchestrator | Monday 02 June 2025 19:58:43 +0000 (0:00:00.713) 0:00:31.688 *********** 2025-06-02 20:08:45.735538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part1', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part14', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part15', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part16', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.735693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.735706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part1', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part14', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part15', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part16', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.735853 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.735860 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.735875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-06-02 20:08:45.735902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735921 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.735927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.735951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2', 'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part1', 'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part14', 'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part15', 'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part16', 
'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5468daec--208d--5ea7--b544--bcde6bebed84-osd--block--5468daec--208d--5ea7--b544--bcde6bebed84', 'dm-uuid-LVM-WMIWgYgOFk5ve8pvyr1nTHKEdH5fxpgS1EwOKDiC5TmWopEDT2MKqICjuO1Jttyn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--d0ca6db9--1635--53d8--80de--4807c4d987bd-osd--block--d0ca6db9--1635--53d8--80de--4807c4d987bd', 'dm-uuid-LVM-ECchxFJiM7QA1jYtezbX90EZmKKpcqLEsHKrqJ11nIfhHbi0lk5eP32LYNNj2Hwy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736576 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.736583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b573976--5050--5314--b52d--708d81144fb3-osd--block--0b573976--5050--5314--b52d--708d81144fb3', 'dm-uuid-LVM-1ieb0bhxLuo1kHWLx7lbi5QD13h2huVqw3KwvjcWks8X7FvRPYMdCLNXWvRgVFsa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5468daec--208d--5ea7--b544--bcde6bebed84-osd--block--5468daec--208d--5ea7--b544--bcde6bebed84'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lrBEGl-yw2Y-BdE1-rDP5-YlEE-ZosO-hDZ9bW', 'scsi-0QEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250', 'scsi-SQEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dc535ca--7422--5c6b--b80a--593b3887af48-osd--block--1dc535ca--7422--5c6b--b80a--593b3887af48', 'dm-uuid-LVM-LoHkm5olbES90WwMvikiRHIidohw4vuw5S041h1adMdpSXokKEv2Nsailu7a9QH4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d0ca6db9--1635--53d8--80de--4807c4d987bd-osd--block--d0ca6db9--1635--53d8--80de--4807c4d987bd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-feoeHi-pPOh-J9cI-uId5-a6oN-6vwN-1Fyx2n', 'scsi-0QEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773', 'scsi-SQEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335', 'scsi-SQEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-55-00']}, 'model': 'QEMU 
DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736690 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.736696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b51fe1f--19f9--5db6--a741--38088f1d71cf-osd--block--1b51fe1f--19f9--5db6--a741--38088f1d71cf', 
'dm-uuid-LVM-GuD4Jm0I7W9dotSu8GihbrGJp815o6d3uFyVPxNhhoeqbWy7mkQpQj1enCIgUfPw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2dc54921--ef42--515a--84de--1f3d0e017dc1-osd--block--2dc54921--ef42--515a--84de--1f3d0e017dc1', 'dm-uuid-LVM-1aGrVgeJpeKfYtgTckKmxRoVB5YYOvQiZIwOikGdOr7fackyeqw1WIXsxOYiO8iB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736797 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:08:45.736910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part1', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part14', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part15', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part15'], 'labels': 
['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part16', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0b573976--5050--5314--b52d--708d81144fb3-osd--block--0b573976--5050--5314--b52d--708d81144fb3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PTVbW2-YvR3-vTqK-UVZC-wNKM-c7G3-38YEyq', 'scsi-0QEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696', 'scsi-SQEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1b51fe1f--19f9--5db6--a741--38088f1d71cf-osd--block--1b51fe1f--19f9--5db6--a741--38088f1d71cf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dVFu7w-JCsN-X8aA-UVLS-mzXn-63P3-CNrvfa', 'scsi-0QEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76', 'scsi-SQEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1dc535ca--7422--5c6b--b80a--593b3887af48-osd--block--1dc535ca--7422--5c6b--b80a--593b3887af48'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Zkzeu-y56r-nEpa-frJC-TkLT-wBpE-VCRmuy', 'scsi-0QEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4', 'scsi-SQEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2dc54921--ef42--515a--84de--1f3d0e017dc1-osd--block--2dc54921--ef42--515a--84de--1f3d0e017dc1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DDdNZa-ucWj-2nM9-Whe6-n6xS-1kw3-n4Xe5i', 'scsi-0QEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6', 'scsi-SQEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f', 'scsi-SQEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db', 'scsi-SQEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.736994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:08:45.737000 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.737009 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.737015 | orchestrator | 2025-06-02 20:08:45.737021 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when 
osd_auto_discovery] *** 2025-06-02 20:08:45.737027 | orchestrator | Monday 02 June 2025 19:58:44 +0000 (0:00:01.642) 0:00:33.331 *********** 2025-06-02 20:08:45.737033 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737040 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737045 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737059 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737066 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737072 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737082 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737089 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737101 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part1', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part14', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part15', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part16', 'scsi-SQEMU_QEMU_HARDDISK_794ae5fd-3701-41a4-bdcf-eea74a87ef71-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 20:08:45.737170 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737183 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.737190 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737197 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737204 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737216 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737227 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737234 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737246 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737252 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737263 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part1', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part14', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part15', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part16', 'scsi-SQEMU_QEMU_HARDDISK_43f457e2-7039-41c2-9765-8ce3083f4c01-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737276 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737287 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737294 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737306 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737313 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737323 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737331 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737342 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737349 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737362 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2', 'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part1', 'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part14', 'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part15', 'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part16', 'scsi-SQEMU_QEMU_HARDDISK_712c0a2f-f072-4fe5-8606-72d3e6b109d2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737374 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737381 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.737393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--5468daec--208d--5ea7--b544--bcde6bebed84-osd--block--5468daec--208d--5ea7--b544--bcde6bebed84', 'dm-uuid-LVM-WMIWgYgOFk5ve8pvyr1nTHKEdH5fxpgS1EwOKDiC5TmWopEDT2MKqICjuO1Jttyn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d0ca6db9--1635--53d8--80de--4807c4d987bd-osd--block--d0ca6db9--1635--53d8--80de--4807c4d987bd', 'dm-uuid-LVM-ECchxFJiM7QA1jYtezbX90EZmKKpcqLEsHKrqJ11nIfhHbi0lk5eP32LYNNj2Hwy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737417 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-06-02 20:08:45.737425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737443 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737454 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737460 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737477 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737502 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5468daec--208d--5ea7--b544--bcde6bebed84-osd--block--5468daec--208d--5ea7--b544--bcde6bebed84'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lrBEGl-yw2Y-BdE1-rDP5-YlEE-ZosO-hDZ9bW', 'scsi-0QEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250', 'scsi-SQEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d0ca6db9--1635--53d8--80de--4807c4d987bd-osd--block--d0ca6db9--1635--53d8--80de--4807c4d987bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-feoeHi-pPOh-J9cI-uId5-a6oN-6vwN-1Fyx2n', 'scsi-0QEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773', 'scsi-SQEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335', 'scsi-SQEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737530 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.737537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737725 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b573976--5050--5314--b52d--708d81144fb3-osd--block--0b573976--5050--5314--b52d--708d81144fb3', 'dm-uuid-LVM-1ieb0bhxLuo1kHWLx7lbi5QD13h2huVqw3KwvjcWks8X7FvRPYMdCLNXWvRgVFsa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737775 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dc535ca--7422--5c6b--b80a--593b3887af48-osd--block--1dc535ca--7422--5c6b--b80a--593b3887af48', 'dm-uuid-LVM-LoHkm5olbES90WwMvikiRHIidohw4vuw5S041h1adMdpSXokKEv2Nsailu7a9QH4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737790 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737802 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737810 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737843 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.737848 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737854 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737865 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737911 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0b573976--5050--5314--b52d--708d81144fb3-osd--block--0b573976--5050--5314--b52d--708d81144fb3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PTVbW2-YvR3-vTqK-UVZC-wNKM-c7G3-38YEyq', 'scsi-0QEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696', 'scsi-SQEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737920 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1dc535ca--7422--5c6b--b80a--593b3887af48-osd--block--1dc535ca--7422--5c6b--b80a--593b3887af48'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Zkzeu-y56r-nEpa-frJC-TkLT-wBpE-VCRmuy', 'scsi-0QEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4', 'scsi-SQEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737929 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db', 'scsi-SQEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.737942 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.737953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b51fe1f--19f9--5db6--a741--38088f1d71cf-osd--block--1b51fe1f--19f9--5db6--a741--38088f1d71cf', 'dm-uuid-LVM-GuD4Jm0I7W9dotSu8GihbrGJp815o6d3uFyVPxNhhoeqbWy7mkQpQj1enCIgUfPw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738348 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2dc54921--ef42--515a--84de--1f3d0e017dc1-osd--block--2dc54921--ef42--515a--84de--1f3d0e017dc1', 'dm-uuid-LVM-1aGrVgeJpeKfYtgTckKmxRoVB5YYOvQiZIwOikGdOr7fackyeqw1WIXsxOYiO8iB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738383 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738390 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738413 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738420 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738425 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738432 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738452 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part1', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part14', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part15', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part16', 
'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738465 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1b51fe1f--19f9--5db6--a741--38088f1d71cf-osd--block--1b51fe1f--19f9--5db6--a741--38088f1d71cf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dVFu7w-JCsN-X8aA-UVLS-mzXn-63P3-CNrvfa', 'scsi-0QEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76', 'scsi-SQEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2dc54921--ef42--515a--84de--1f3d0e017dc1-osd--block--2dc54921--ef42--515a--84de--1f3d0e017dc1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DDdNZa-ucWj-2nM9-Whe6-n6xS-1kw3-n4Xe5i', 'scsi-0QEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6', 'scsi-SQEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738483 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f', 'scsi-SQEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:08:45.738490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:08:45.738500 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.738507 | orchestrator |
2025-06-02 20:08:45.738513 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-02 20:08:45.738519 | orchestrator | Monday 02 June 2025 19:58:46 +0000 (0:00:01.808) 0:00:35.140 ***********
2025-06-02 20:08:45.738526 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.738532 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.738538 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.738548 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.738554 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.738560 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.738567 | orchestrator |
2025-06-02 20:08:45.738573 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-02 20:08:45.738580 | orchestrator | Monday 02 June 2025 19:58:48 +0000 (0:00:01.744) 0:00:36.884 ***********
2025-06-02 20:08:45.738586 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.738593 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.738599 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.738606 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.738612 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.738619 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.738625 | orchestrator |
2025-06-02 20:08:45.738632 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 20:08:45.738638 | orchestrator | Monday 02 June 2025 19:58:49 +0000 (0:00:01.091) 0:00:37.976 ***********
2025-06-02 20:08:45.738644 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.738651 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.738657 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.738664 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.738670 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.738677 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.738683 | orchestrator |
2025-06-02 20:08:45.738689 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 20:08:45.738695 | orchestrator | Monday 02 June 2025 19:58:50 +0000 (0:00:01.001) 0:00:38.977 ***********
2025-06-02 20:08:45.738701 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.738708 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.738714 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.738720 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.738727 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.738734 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.738740 | orchestrator |
2025-06-02 20:08:45.738772 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 20:08:45.738780 | orchestrator | Monday 02 June 2025 19:58:51 +0000 (0:00:00.614) 0:00:39.592 ***********
2025-06-02 20:08:45.738786 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.738792 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.738799 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.738805 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.738811 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.738818 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.738824 | orchestrator |
2025-06-02 20:08:45.738830 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 20:08:45.738922 | orchestrator | Monday 02 June 2025 19:58:52 +0000 (0:00:00.902) 0:00:40.494 ***********
2025-06-02 20:08:45.738929 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.738935 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.738941 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.738947 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.738953 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.738966 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.738972 | orchestrator |
2025-06-02 20:08:45.738979 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-02 20:08:45.738985 | orchestrator | Monday 02 June 2025 19:58:52 +0000 (0:00:00.703) 0:00:41.197 ***********
2025-06-02 20:08:45.738992 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.738998 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 20:08:45.739004 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:08:45.739011 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 20:08:45.739018 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 20:08:45.739024 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 20:08:45.739031 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 20:08:45.739042 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:08:45.739049 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 20:08:45.739055 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 20:08:45.739062 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 20:08:45.739069 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 20:08:45.739075 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 20:08:45.739081 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 20:08:45.739088 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 20:08:45.739094 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 20:08:45.739101 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 20:08:45.739108 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 20:08:45.739114 | orchestrator |
2025-06-02 20:08:45.739121 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-02 20:08:45.739127 | orchestrator | Monday 02 June 2025 19:58:56 +0000 (0:00:03.550) 0:00:44.748 ***********
2025-06-02 20:08:45.739134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.739141 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:08:45.739147 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:08:45.739155 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.739162 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 20:08:45.739168 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 20:08:45.739175 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 20:08:45.739182 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.739189 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 20:08:45.739196 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 20:08:45.739203 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 20:08:45.739209 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.739222 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 20:08:45.739228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 20:08:45.739235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 20:08:45.739241 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.739248 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 20:08:45.739254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 20:08:45.739261 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 20:08:45.739267 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.739273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 20:08:45.739279 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 20:08:45.739285 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 20:08:45.739297 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.739304 | orchestrator |
2025-06-02 20:08:45.739310 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-02 20:08:45.739317 | orchestrator | Monday 02 June 2025 19:58:57 +0000 (0:00:00.970) 0:00:45.719 ***********
2025-06-02 20:08:45.739322 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.739329 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.739335 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.739343 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.739350 | orchestrator |
2025-06-02 20:08:45.739356 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 20:08:45.739364 | orchestrator | Monday 02 June 2025 19:58:58 +0000 (0:00:01.161) 0:00:46.881 ***********
2025-06-02 20:08:45.739371 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.739377 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.739383 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.739389 | orchestrator |
2025-06-02 20:08:45.739396 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 20:08:45.739402 | orchestrator | Monday 02 June 2025 19:58:59 +0000 (0:00:00.563) 0:00:47.445 ***********
2025-06-02 20:08:45.739409 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.739415 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.739421 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.739428 | orchestrator |
2025-06-02 20:08:45.739434 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 20:08:45.739440 | orchestrator | Monday 02 June 2025 19:58:59 +0000 (0:00:00.562) 0:00:48.007 ***********
2025-06-02 20:08:45.739447 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.739453 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.739459 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.739466 | orchestrator |
2025-06-02 20:08:45.739472 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 20:08:45.739478 | orchestrator | Monday 02 June 2025 19:58:59 +0000 (0:00:00.337) 0:00:48.345 ***********
2025-06-02 20:08:45.739485 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.739491 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.739498 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.739504 | orchestrator |
2025-06-02 20:08:45.739509 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 20:08:45.739515 | orchestrator | Monday 02 June 2025 19:59:00 +0000 (0:00:00.392) 0:00:48.737 ***********
2025-06-02 20:08:45.739521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:08:45.739526 | orchestrator |
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:08:45.739532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:08:45.739538 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.739544 | orchestrator | 2025-06-02 20:08:45.739554 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-02 20:08:45.739560 | orchestrator | Monday 02 June 2025 19:59:00 +0000 (0:00:00.325) 0:00:49.063 *********** 2025-06-02 20:08:45.739567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:08:45.739573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:08:45.739579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:08:45.739586 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.739592 | orchestrator | 2025-06-02 20:08:45.739599 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-02 20:08:45.739606 | orchestrator | Monday 02 June 2025 19:59:01 +0000 (0:00:00.434) 0:00:49.498 *********** 2025-06-02 20:08:45.739610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:08:45.739618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:08:45.739623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:08:45.739629 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.739635 | orchestrator | 2025-06-02 20:08:45.739642 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-02 20:08:45.739648 | orchestrator | Monday 02 June 2025 19:59:01 +0000 (0:00:00.620) 0:00:50.120 *********** 2025-06-02 20:08:45.739654 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.739661 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.739667 | orchestrator | ok: [testbed-node-5] 
2025-06-02 20:08:45.739673 | orchestrator |
2025-06-02 20:08:45.739680 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 20:08:45.739684 | orchestrator | Monday 02 June 2025 19:59:02 +0000 (0:00:00.556) 0:00:50.677 ***********
2025-06-02 20:08:45.739688 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 20:08:45.739692 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 20:08:45.739696 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 20:08:45.739699 | orchestrator |
2025-06-02 20:08:45.739703 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-02 20:08:45.739707 | orchestrator | Monday 02 June 2025 19:59:03 +0000 (0:00:00.741) 0:00:51.418 ***********
2025-06-02 20:08:45.739719 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.739726 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:08:45.739732 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:08:45.739738 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 20:08:45.739744 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 20:08:45.739798 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 20:08:45.739805 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 20:08:45.739811 | orchestrator |
2025-06-02 20:08:45.739817 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-02 20:08:45.739823 | orchestrator | Monday 02 June 2025 19:59:03 +0000 (0:00:00.758) 0:00:52.176 ***********
2025-06-02 20:08:45.739828 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.739835 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:08:45.739840 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:08:45.739847 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 20:08:45.739853 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 20:08:45.739859 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 20:08:45.739865 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 20:08:45.739872 | orchestrator |
2025-06-02 20:08:45.739878 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 20:08:45.739884 | orchestrator | Monday 02 June 2025 19:59:05 +0000 (0:00:01.953) 0:00:54.130 ***********
2025-06-02 20:08:45.739892 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.739996 | orchestrator |
2025-06-02 20:08:45.740004 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 20:08:45.740010 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:01.175) 0:00:55.305 ***********
2025-06-02 20:08:45.740017 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.740031 | orchestrator |
2025-06-02 20:08:45.740038 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 20:08:45.740044 | orchestrator | Monday 02 June 2025 19:59:08 +0000 (0:00:01.529) 0:00:56.835 ***********
2025-06-02 20:08:45.740050 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.740057 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.740063 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.740069 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.740073 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.740077 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.740081 | orchestrator |
2025-06-02 20:08:45.740085 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 20:08:45.740089 | orchestrator | Monday 02 June 2025 19:59:09 +0000 (0:00:01.225) 0:00:58.060 ***********
2025-06-02 20:08:45.740093 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740101 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740104 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740108 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.740112 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.740116 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.740120 | orchestrator |
2025-06-02 20:08:45.740124 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 20:08:45.740127 | orchestrator | Monday 02 June 2025 19:59:11 +0000 (0:00:01.713) 0:00:59.774 ***********
2025-06-02 20:08:45.740131 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740135 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740139 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740143 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.740150 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.740156 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.740162 | orchestrator |
2025-06-02 20:08:45.740168 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 20:08:45.740174 | orchestrator | Monday 02 June 2025 19:59:12 +0000 (0:00:01.462) 0:01:01.236 ***********
2025-06-02 20:08:45.740181 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740187 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740193 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740199 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.740206 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.740212 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.740218 | orchestrator |
2025-06-02 20:08:45.740224 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 20:08:45.740231 | orchestrator | Monday 02 June 2025 19:59:13 +0000 (0:00:01.113) 0:01:02.349 ***********
2025-06-02 20:08:45.740236 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.740243 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.740249 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.740255 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.740262 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.740268 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.740274 | orchestrator |
2025-06-02 20:08:45.740280 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 20:08:45.740286 | orchestrator | Monday 02 June 2025 19:59:15 +0000 (0:00:01.185) 0:01:03.535 ***********
2025-06-02 20:08:45.740301 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740307 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740313 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740320 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.740326 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.740332 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.740338 | orchestrator |
2025-06-02 20:08:45.740344 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 20:08:45.740351 | orchestrator | Monday 02 June 2025 19:59:15 +0000 (0:00:00.812) 0:01:04.348 ***********
2025-06-02 20:08:45.740362 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740369 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740375 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740381 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.740387 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.740393 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.740400 | orchestrator |
2025-06-02 20:08:45.740406 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 20:08:45.740413 | orchestrator | Monday 02 June 2025 19:59:17 +0000 (0:00:01.205) 0:01:05.553 ***********
2025-06-02 20:08:45.740419 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.740425 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.740431 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.740437 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.740443 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.740450 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.740456 | orchestrator |
2025-06-02 20:08:45.740462 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 20:08:45.740469 | orchestrator | Monday 02 June 2025 19:59:18 +0000 (0:00:01.491) 0:01:07.045 ***********
2025-06-02 20:08:45.740475 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.740481 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.740487 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.740493 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.740499 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.740507 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.740511 | orchestrator |
2025-06-02 20:08:45.740515 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 20:08:45.740519 | orchestrator | Monday 02 June 2025 19:59:20 +0000 (0:00:01.397) 0:01:08.442 ***********
2025-06-02 20:08:45.740523 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740526 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740530 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740534 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.740537 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.740541 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.740545 | orchestrator |
2025-06-02 20:08:45.740549 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 20:08:45.740552 | orchestrator | Monday 02 June 2025 19:59:20 +0000 (0:00:00.603) 0:01:09.046 ***********
2025-06-02 20:08:45.740556 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.740560 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.740564 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.740568 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.740572 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.740578 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.740584 | orchestrator |
2025-06-02 20:08:45.740590 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 20:08:45.740596 | orchestrator | Monday 02 June 2025 19:59:21 +0000 (0:00:01.063) 0:01:10.110 ***********
2025-06-02 20:08:45.740602 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740608 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740615 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740621 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.740628 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.740634 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.740640 | orchestrator |
2025-06-02 20:08:45.740646 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 20:08:45.740653 | orchestrator | Monday 02 June 2025 19:59:22 +0000 (0:00:00.681) 0:01:10.791 ***********
2025-06-02 20:08:45.740662 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740669 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740675 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740682 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.740693 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.740699 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.740707 | orchestrator |
2025-06-02 20:08:45.740713 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 20:08:45.740769 | orchestrator | Monday 02 June 2025 19:59:23 +0000 (0:00:00.961) 0:01:11.753 ***********
2025-06-02 20:08:45.740778 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740784 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740790 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740796 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.740803 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.740809 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.740814 | orchestrator |
2025-06-02 20:08:45.740821 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 20:08:45.740827 | orchestrator | Monday 02 June 2025 19:59:23 +0000 (0:00:00.598) 0:01:12.352 ***********
2025-06-02 20:08:45.740834 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740840 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740847 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740853 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.740859 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.740893 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.740899 | orchestrator |
2025-06-02 20:08:45.740929 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 20:08:45.740936 | orchestrator | Monday 02 June 2025 19:59:24 +0000 (0:00:00.761) 0:01:13.113 ***********
2025-06-02 20:08:45.740942 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.740949 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.740955 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.740962 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.740968 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.740975 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.740982 | orchestrator |
2025-06-02 20:08:45.740988 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 20:08:45.741001 | orchestrator | Monday 02 June 2025 19:59:25 +0000 (0:00:00.575) 0:01:13.688 ***********
2025-06-02 20:08:45.741008 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.741014 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.741021 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.741027 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.741033 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.741040 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.741046 | orchestrator |
2025-06-02 20:08:45.741053 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 20:08:45.741059 | orchestrator | Monday 02 June 2025 19:59:26 +0000 (0:00:00.746) 0:01:14.435 ***********
2025-06-02 20:08:45.741066 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.741073 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.741079 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.741086 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.741092 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.741099 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.741105 | orchestrator |
2025-06-02 20:08:45.741111 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 20:08:45.741118 | orchestrator | Monday 02 June 2025 19:59:26 +0000 (0:00:00.764) 0:01:15.199 ***********
2025-06-02 20:08:45.741124 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.741130 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.741137 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.741143 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.741149 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.741156 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.741162 | orchestrator |
2025-06-02 20:08:45.741169 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-06-02 20:08:45.741181 | orchestrator | Monday 02 June 2025 19:59:28 +0000 (0:00:01.264) 0:01:16.464 ***********
2025-06-02 20:08:45.741188 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.741195 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.741201 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.741207 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:08:45.741213 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:08:45.741220 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:08:45.741226 | orchestrator |
2025-06-02 20:08:45.741232 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-06-02 20:08:45.741239 | orchestrator | Monday 02 June 2025 19:59:29 +0000 (0:00:01.612) 0:01:18.077 ***********
2025-06-02 20:08:45.741246 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.741252 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.741258 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:08:45.741264 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:08:45.741271 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:08:45.741277 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.741283 | orchestrator |
2025-06-02 20:08:45.741289 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-06-02 20:08:45.741296 | orchestrator | Monday 02 June 2025 19:59:31 +0000 (0:00:02.011) 0:01:20.089 ***********
2025-06-02 20:08:45.741302 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.741309 | orchestrator |
2025-06-02 20:08:45.741316 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-06-02 20:08:45.741322 | orchestrator | Monday 02 June 2025 19:59:32 +0000 (0:00:01.243) 0:01:21.332 ***********
2025-06-02 20:08:45.741328 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.741334 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.741341 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.741347 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.741353 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.741360 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.741366 | orchestrator |
2025-06-02 20:08:45.741372 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-06-02 20:08:45.741387 | orchestrator | Monday 02 June 2025 19:59:33 +0000 (0:00:00.832) 0:01:22.164 ***********
2025-06-02 20:08:45.741394 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.741400 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.741406 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.741412 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.741419 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.741425 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.741431 | orchestrator |
2025-06-02 20:08:45.741438 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-06-02 20:08:45.741444 | orchestrator | Monday 02 June 2025 19:59:34 +0000 (0:00:00.541) 0:01:22.706 ***********
2025-06-02 20:08:45.741451 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 20:08:45.741457 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 20:08:45.741464 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 20:08:45.741470 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 20:08:45.741477 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 20:08:45.741483 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 20:08:45.741489 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 20:08:45.741496 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 20:08:45.741507 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 20:08:45.741514 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 20:08:45.741520 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 20:08:45.741526 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 20:08:45.741533 | orchestrator |
2025-06-02 20:08:45.741544 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-06-02 20:08:45.741551 | orchestrator | Monday 02 June 2025 19:59:36 +0000 (0:00:01.707) 0:01:24.413 ***********
2025-06-02 20:08:45.741557 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.741564 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.741570 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.741577 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:08:45.741583 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:08:45.741590 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:08:45.741596 | orchestrator |
2025-06-02 20:08:45.741603 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-06-02 20:08:45.741609 | orchestrator | Monday 02 June 2025 19:59:36 +0000 (0:00:00.867) 0:01:25.280 ***********
2025-06-02 20:08:45.741616 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.741622 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.741628 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.741635 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.741641 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.741648 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.741655 | orchestrator |
2025-06-02 20:08:45.741662 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-06-02 20:08:45.741668 | orchestrator | Monday 02 June 2025 19:59:37 +0000 (0:00:00.787) 0:01:26.068 ***********
2025-06-02 20:08:45.741674 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.741711 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.741717 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.741721 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.741725 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.741728 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.741732 | orchestrator |
2025-06-02 20:08:45.741736 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-06-02 20:08:45.741740 | orchestrator | Monday 02 June 2025 19:59:38 +0000 (0:00:00.565) 0:01:26.633 ***********
2025-06-02 20:08:45.741743 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.741762 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.741767 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.741771 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.741775 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.741779 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.741783 | orchestrator |
2025-06-02 20:08:45.741786 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-06-02 20:08:45.741790 | orchestrator | Monday 02 June 2025 19:59:39 +0000 (0:00:00.778) 0:01:27.412 ***********
2025-06-02 20:08:45.741794 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.741798 | orchestrator |
2025-06-02 20:08:45.741802 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-06-02 20:08:45.741806 | orchestrator | Monday 02 June 2025 19:59:40 +0000 (0:00:01.197) 0:01:28.609 ***********
2025-06-02 20:08:45.741810 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.741813 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.741817 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.741821 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.741825 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.741833 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.741837 | orchestrator |
2025-06-02 20:08:45.741841 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-06-02 20:08:45.741845 | orchestrator | Monday 02 June 2025 20:00:33 +0000 (0:00:52.810) 0:02:21.419 ***********
2025-06-02 20:08:45.741848 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 20:08:45.741852 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 20:08:45.741859 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 20:08:45.741863 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.741867 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 20:08:45.741871 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 20:08:45.741874 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 20:08:45.741878 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.741882 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 20:08:45.741886 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 20:08:45.741889 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 20:08:45.741893 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.741897 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 20:08:45.741900 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 20:08:45.741904 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 20:08:45.741908 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.741912 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 20:08:45.741916 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 20:08:45.741919 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 20:08:45.741923 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.741927 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 20:08:45.741931 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 20:08:45.741934 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 20:08:45.741942 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.741946 | orchestrator |
2025-06-02 20:08:45.741950 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-06-02 20:08:45.741954 | orchestrator | Monday 02 June 2025 20:00:34 +0000 (0:00:00.946) 0:02:22.366 ***********
2025-06-02 20:08:45.741957 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.741961 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.741965 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.741969 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.741975 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.741981 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.741987 | orchestrator |
2025-06-02 20:08:45.741993 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-06-02 20:08:45.742000 | orchestrator | Monday 02 June 2025 20:00:34 +0000 (0:00:00.550) 0:02:22.916 ***********
2025-06-02 20:08:45.742006 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.742049 | orchestrator |
2025-06-02 20:08:45.742058 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-06-02 20:08:45.742064 | orchestrator | Monday 02 June 2025 20:00:34 +0000 (0:00:00.149) 0:02:23.066 ***********
2025-06-02 20:08:45.742071 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.742078 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.742090 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.742097 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.742103 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.742108 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.742114 | orchestrator |
2025-06-02 20:08:45.742119 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-06-02 20:08:45.742125 | orchestrator | Monday 02 June 2025 20:00:35 +0000 (0:00:00.923) 0:02:23.989 ***********
2025-06-02 20:08:45.742132 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.742138 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.742145 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.742150 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.742157 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.742162 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.742169 | orchestrator |
2025-06-02 20:08:45.742176 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-06-02 20:08:45.742183 | orchestrator | Monday 02 June 2025 20:00:36 +0000 (0:00:00.798) 0:02:24.788 ***********
2025-06-02 20:08:45.742189 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.742195 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.742201 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.742207 |
orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.742214 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.742220 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.742226 | orchestrator | 2025-06-02 20:08:45.742233 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-02 20:08:45.742239 | orchestrator | Monday 02 June 2025 20:00:37 +0000 (0:00:00.738) 0:02:25.527 *********** 2025-06-02 20:08:45.742246 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.742253 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.742259 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.742265 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.742271 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.742277 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.742283 | orchestrator | 2025-06-02 20:08:45.742290 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-02 20:08:45.742296 | orchestrator | Monday 02 June 2025 20:00:39 +0000 (0:00:02.108) 0:02:27.635 *********** 2025-06-02 20:08:45.742302 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.742309 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.742316 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.742320 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.742325 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.742331 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.742337 | orchestrator | 2025-06-02 20:08:45.742343 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-02 20:08:45.742354 | orchestrator | Monday 02 June 2025 20:00:40 +0000 (0:00:00.786) 0:02:28.422 *********** 2025-06-02 20:08:45.742361 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.742369 | orchestrator | 2025-06-02 20:08:45.742376 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-02 20:08:45.742382 | orchestrator | Monday 02 June 2025 20:00:41 +0000 (0:00:01.000) 0:02:29.422 *********** 2025-06-02 20:08:45.742388 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.742394 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.742401 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.742407 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.742413 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.742420 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.742425 | orchestrator | 2025-06-02 20:08:45.742431 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-02 20:08:45.742437 | orchestrator | Monday 02 June 2025 20:00:41 +0000 (0:00:00.671) 0:02:30.094 *********** 2025-06-02 20:08:45.742449 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.742455 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.742462 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.742468 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.742474 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.742481 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.742487 | orchestrator | 2025-06-02 20:08:45.742493 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-02 20:08:45.742499 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.830) 0:02:30.924 *********** 2025-06-02 20:08:45.742506 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.742512 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.742518 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.742524 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.742530 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.742537 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.742543 | orchestrator | 2025-06-02 20:08:45.742549 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-02 20:08:45.742562 | orchestrator | Monday 02 June 2025 20:00:43 +0000 (0:00:00.602) 0:02:31.527 *********** 2025-06-02 20:08:45.742568 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.742574 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.742580 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.742587 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.742593 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.742599 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.742605 | orchestrator | 2025-06-02 20:08:45.742611 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-02 20:08:45.742617 | orchestrator | Monday 02 June 2025 20:00:43 +0000 (0:00:00.774) 0:02:32.302 *********** 2025-06-02 20:08:45.742623 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.742629 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.742635 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.742641 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.742648 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.742654 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.742660 | orchestrator | 2025-06-02 20:08:45.742666 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-02 20:08:45.742672 | orchestrator | Monday 02 June 2025 20:00:44 +0000 (0:00:00.617) 0:02:32.919 *********** 2025-06-02 20:08:45.742678 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.742684 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.742689 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.742695 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.742701 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.742707 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.742712 | orchestrator | 2025-06-02 20:08:45.742719 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-02 20:08:45.742724 | orchestrator | Monday 02 June 2025 20:00:45 +0000 (0:00:00.885) 0:02:33.804 *********** 2025-06-02 20:08:45.742730 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.742736 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.742742 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.742789 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.742797 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.742804 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.742810 | orchestrator | 2025-06-02 20:08:45.742817 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-02 20:08:45.742823 | orchestrator | Monday 02 June 2025 20:00:46 +0000 (0:00:00.674) 0:02:34.479 *********** 2025-06-02 20:08:45.742830 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.742836 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.742850 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.742857 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.742863 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.742869 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.742875 | orchestrator | 2025-06-02 20:08:45.742882 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-02 20:08:45.742886 | orchestrator | Monday 02 June 2025 20:00:46 +0000 
(0:00:00.688) 0:02:35.168 *********** 2025-06-02 20:08:45.742890 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.742895 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.742898 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.742902 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.742906 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.742910 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.742914 | orchestrator | 2025-06-02 20:08:45.742918 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-02 20:08:45.742922 | orchestrator | Monday 02 June 2025 20:00:47 +0000 (0:00:01.039) 0:02:36.208 *********** 2025-06-02 20:08:45.742926 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.742931 | orchestrator | 2025-06-02 20:08:45.742940 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-02 20:08:45.742944 | orchestrator | Monday 02 June 2025 20:00:48 +0000 (0:00:01.012) 0:02:37.221 *********** 2025-06-02 20:08:45.742950 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-02 20:08:45.742957 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-02 20:08:45.742963 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-02 20:08:45.742969 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-02 20:08:45.742976 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-02 20:08:45.742982 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-02 20:08:45.742989 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-02 20:08:45.742995 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-02 20:08:45.743001 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/) 2025-06-02 20:08:45.743008 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-02 20:08:45.743014 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-02 20:08:45.743021 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-02 20:08:45.743027 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-02 20:08:45.743033 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-02 20:08:45.743040 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-02 20:08:45.743047 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-02 20:08:45.743053 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-02 20:08:45.743060 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-02 20:08:45.743066 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-02 20:08:45.743073 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-02 20:08:45.743079 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-02 20:08:45.743102 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-02 20:08:45.743108 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-02 20:08:45.743115 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-02 20:08:45.743121 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-02 20:08:45.743127 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-02 20:08:45.743134 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-02 20:08:45.743140 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-02 20:08:45.743151 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 
2025-06-02 20:08:45.743156 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-02 20:08:45.743162 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-02 20:08:45.743168 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-02 20:08:45.743174 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-02 20:08:45.743180 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-02 20:08:45.743186 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-02 20:08:45.743192 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-02 20:08:45.743198 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-02 20:08:45.743204 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-02 20:08:45.743210 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-02 20:08:45.743216 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-02 20:08:45.743223 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-02 20:08:45.743229 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-02 20:08:45.743235 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-02 20:08:45.743241 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-02 20:08:45.743247 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-02 20:08:45.743253 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-02 20:08:45.743260 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 20:08:45.743266 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-02 20:08:45.743272 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 20:08:45.743278 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 20:08:45.743285 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-02 20:08:45.743291 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 20:08:45.743297 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 20:08:45.743304 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 20:08:45.743310 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 20:08:45.743316 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 20:08:45.743323 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 20:08:45.743329 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 20:08:45.743335 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 20:08:45.743345 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 20:08:45.743352 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 20:08:45.743359 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 20:08:45.743365 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 20:08:45.743372 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 20:08:45.743378 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 20:08:45.743385 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 20:08:45.743391 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 20:08:45.743397 
| orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 20:08:45.743407 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 20:08:45.743421 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 20:08:45.743430 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 20:08:45.743436 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 20:08:45.743443 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 20:08:45.743449 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 20:08:45.743456 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 20:08:45.743462 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 20:08:45.743469 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 20:08:45.743475 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 20:08:45.743482 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 20:08:45.743495 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 20:08:45.743503 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 20:08:45.743509 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-02 20:08:45.743515 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-02 20:08:45.743521 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-02 20:08:45.743528 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 20:08:45.743534 | orchestrator | changed: [testbed-node-4] => 
(item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 20:08:45.743541 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-06-02 20:08:45.743547 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-02 20:08:45.743553 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-02 20:08:45.743559 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-02 20:08:45.743565 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-02 20:08:45.743572 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-02 20:08:45.743578 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 20:08:45.743662 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-02 20:08:45.743676 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-02 20:08:45.743683 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-02 20:08:45.743689 | orchestrator | 2025-06-02 20:08:45.743696 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-02 20:08:45.743702 | orchestrator | Monday 02 June 2025 20:00:55 +0000 (0:00:06.822) 0:02:44.043 *********** 2025-06-02 20:08:45.743709 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.743715 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.743721 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.743728 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.743735 | orchestrator | 2025-06-02 20:08:45.743742 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-02 20:08:45.743764 | orchestrator | Monday 02 June 2025 20:00:56 +0000 (0:00:00.873) 0:02:44.917 *********** 2025-06-02 20:08:45.743771 | orchestrator | 
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.743778 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.743784 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.743798 | orchestrator | 2025-06-02 20:08:45.743805 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-06-02 20:08:45.743811 | orchestrator | Monday 02 June 2025 20:00:57 +0000 (0:00:00.615) 0:02:45.533 *********** 2025-06-02 20:08:45.743818 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.743824 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.743836 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.743842 | orchestrator | 2025-06-02 20:08:45.743849 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-02 20:08:45.743855 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:01.321) 0:02:46.855 *********** 2025-06-02 20:08:45.743861 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.743867 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.743873 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.743880 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.743886 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.743893 | orchestrator | ok: [testbed-node-5] 2025-06-02 
20:08:45.743899 | orchestrator | 2025-06-02 20:08:45.743905 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-02 20:08:45.743912 | orchestrator | Monday 02 June 2025 20:00:59 +0000 (0:00:00.523) 0:02:47.378 *********** 2025-06-02 20:08:45.743918 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.743924 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.743931 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.743937 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.743944 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.743950 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.743956 | orchestrator | 2025-06-02 20:08:45.743963 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-02 20:08:45.743969 | orchestrator | Monday 02 June 2025 20:00:59 +0000 (0:00:00.904) 0:02:48.283 *********** 2025-06-02 20:08:45.743976 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.743982 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.743988 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.743994 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.744000 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.744006 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.744012 | orchestrator | 2025-06-02 20:08:45.744018 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-02 20:08:45.744024 | orchestrator | Monday 02 June 2025 20:01:00 +0000 (0:00:00.602) 0:02:48.886 *********** 2025-06-02 20:08:45.744030 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.744036 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.744050 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.744057 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.744063 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.744069 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.744075 | orchestrator | 2025-06-02 20:08:45.744082 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-02 20:08:45.744088 | orchestrator | Monday 02 June 2025 20:01:01 +0000 (0:00:00.581) 0:02:49.468 *********** 2025-06-02 20:08:45.744095 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.744101 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.744107 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.744113 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.744120 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.744126 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.744132 | orchestrator | 2025-06-02 20:08:45.744139 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-02 20:08:45.744151 | orchestrator | Monday 02 June 2025 20:01:01 +0000 (0:00:00.543) 0:02:50.011 *********** 2025-06-02 20:08:45.744157 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.744164 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.744170 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.744176 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.744183 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.744189 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.744195 | orchestrator | 2025-06-02 20:08:45.744202 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-02 20:08:45.744208 | orchestrator | Monday 02 June 2025 20:01:02 +0000 (0:00:00.754) 0:02:50.766 *********** 2025-06-02 20:08:45.744215 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.744221 | orchestrator | skipping: 
[testbed-node-1]
2025-06-02 20:08:45.744227 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744232 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.744238 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.744245 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.744252 | orchestrator |
2025-06-02 20:08:45.744259 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-06-02 20:08:45.744265 | orchestrator | Monday 02 June 2025 20:01:02 +0000 (0:00:00.583) 0:02:51.350 ***********
2025-06-02 20:08:45.744271 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744277 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744284 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744290 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.744296 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.744303 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.744309 | orchestrator |
2025-06-02 20:08:45.744315 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-06-02 20:08:45.744322 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:00.813) 0:02:52.163 ***********
2025-06-02 20:08:45.744328 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744334 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744341 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744347 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.744353 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.744360 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.744366 | orchestrator |
2025-06-02 20:08:45.744372 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-06-02 20:08:45.744379 | orchestrator | Monday 02 June 2025 20:01:06 +0000 (0:00:03.107) 0:02:55.271 ***********
2025-06-02 20:08:45.744385 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744391 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744398 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744404 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.744410 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.744416 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.744422 | orchestrator |
2025-06-02 20:08:45.744429 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-06-02 20:08:45.744445 | orchestrator | Monday 02 June 2025 20:01:07 +0000 (0:00:01.014) 0:02:56.285 ***********
2025-06-02 20:08:45.744451 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744458 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744464 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744470 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.744476 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.744483 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.744489 | orchestrator |
2025-06-02 20:08:45.744496 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-06-02 20:08:45.744502 | orchestrator | Monday 02 June 2025 20:01:08 +0000 (0:00:00.719) 0:02:57.005 ***********
2025-06-02 20:08:45.744513 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744519 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744526 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744532 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.744538 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.744544 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.744551 | orchestrator |
2025-06-02 20:08:45.744556 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-06-02 20:08:45.744562 | orchestrator | Monday 02 June 2025 20:01:09 +0000 (0:00:00.676) 0:02:57.682 ***********
2025-06-02 20:08:45.744569 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744575 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744582 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744588 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 20:08:45.744595 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 20:08:45.744601 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 20:08:45.744608 | orchestrator |
2025-06-02 20:08:45.744614 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-06-02 20:08:45.744625 | orchestrator | Monday 02 June 2025 20:01:09 +0000 (0:00:00.549) 0:02:58.231 ***********
2025-06-02 20:08:45.744631 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744638 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744644 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744652 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-06-02 20:08:45.744661 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-06-02 20:08:45.744669 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.744676 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-06-02 20:08:45.744682 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-06-02 20:08:45.744689 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.744695 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-06-02 20:08:45.744701 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-06-02 20:08:45.744709 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.744721 | orchestrator |
2025-06-02 20:08:45.744727 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-06-02 20:08:45.744734 | orchestrator | Monday 02 June 2025 20:01:10 +0000 (0:00:00.734) 0:02:58.966 ***********
2025-06-02 20:08:45.744740 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744761 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744768 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744775 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.744781 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.744787 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.744793 | orchestrator |
2025-06-02 20:08:45.744803 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-06-02 20:08:45.744809 | orchestrator | Monday 02 June 2025 20:01:11 +0000 (0:00:00.496) 0:02:59.462 ***********
2025-06-02 20:08:45.744814 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744820 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744825 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744831 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.744838 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.744842 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.744846 | orchestrator |
2025-06-02 20:08:45.744850 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 20:08:45.744854 | orchestrator | Monday 02 June 2025 20:01:11 +0000 (0:00:00.616) 0:03:00.079 ***********
2025-06-02 20:08:45.744858 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744862 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744866 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744870 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.744873 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.744877 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.744881 | orchestrator |
2025-06-02 20:08:45.744885 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 20:08:45.744889 | orchestrator | Monday 02 June 2025 20:01:12 +0000 (0:00:00.539) 0:03:00.618 ***********
2025-06-02 20:08:45.744893 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744896 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744900 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744904 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.744907 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.744911 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.744915 | orchestrator |
2025-06-02 20:08:45.744919 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 20:08:45.744923 | orchestrator | Monday 02 June 2025 20:01:13 +0000 (0:00:00.813) 0:03:01.432 ***********
2025-06-02 20:08:45.744926 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744930 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744934 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744941 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.744945 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.744949 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.744953 | orchestrator |
2025-06-02 20:08:45.744957 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 20:08:45.744963 | orchestrator | Monday 02 June 2025 20:01:13 +0000 (0:00:00.612) 0:03:02.045 ***********
2025-06-02 20:08:45.744970 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.744976 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.744982 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.744987 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.744994 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.745000 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.745006 | orchestrator |
2025-06-02 20:08:45.745012 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 20:08:45.745024 | orchestrator | Monday 02 June 2025 20:01:14 +0000 (0:00:01.093) 0:03:03.139 ***********
2025-06-02 20:08:45.745031 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 20:08:45.745037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 20:08:45.745043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 20:08:45.745048 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.745054 | orchestrator |
2025-06-02 20:08:45.745059 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 20:08:45.745065 | orchestrator | Monday 02 June 2025 20:01:15 +0000 (0:00:00.432) 0:03:03.571 ***********
2025-06-02 20:08:45.745071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 20:08:45.745077 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 20:08:45.745083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 20:08:45.745088 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.745094 | orchestrator |
2025-06-02 20:08:45.745101 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 20:08:45.745107 | orchestrator | Monday 02 June 2025 20:01:15 +0000 (0:00:00.426) 0:03:03.998 ***********
2025-06-02 20:08:45.745113 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 20:08:45.745119 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 20:08:45.745126 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 20:08:45.745130 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.745134 | orchestrator |
2025-06-02 20:08:45.745138 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 20:08:45.745142 | orchestrator | Monday 02 June 2025 20:01:16 +0000 (0:00:00.407) 0:03:04.406 ***********
2025-06-02 20:08:45.745146 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.745149 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.745153 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.745157 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.745161 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.745165 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.745168 | orchestrator |
2025-06-02 20:08:45.745172 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 20:08:45.745176 | orchestrator | Monday 02 June 2025 20:01:16 +0000 (0:00:00.643) 0:03:05.049 ***********
2025-06-02 20:08:45.745180 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-06-02 20:08:45.745184 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.745188 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-06-02 20:08:45.745191 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.745195 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-06-02 20:08:45.745199 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.745202 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 20:08:45.745206 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 20:08:45.745210 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 20:08:45.745214 | orchestrator |
2025-06-02 20:08:45.745222 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-06-02 20:08:45.745229 | orchestrator | Monday 02 June 2025 20:01:18 +0000 (0:00:01.714) 0:03:06.763 ***********
2025-06-02 20:08:45.745235 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.745241 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.745247 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.745254 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:08:45.745360 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:08:45.745366 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:08:45.745372 | orchestrator |
2025-06-02 20:08:45.745378 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 20:08:45.745385 | orchestrator | Monday 02 June 2025 20:01:20 +0000 (0:00:02.305) 0:03:09.069 ***********
2025-06-02 20:08:45.745398 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.745405 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.745412 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.745418 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:08:45.745424 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:08:45.745432 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:08:45.745436 | orchestrator |
2025-06-02 20:08:45.745441 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-02 20:08:45.745447 | orchestrator | Monday 02 June 2025 20:01:21 +0000 (0:00:00.887) 0:03:09.956 ***********
2025-06-02 20:08:45.745453 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.745459 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.745466 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.745473 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:08:45.745479 | orchestrator |
2025-06-02 20:08:45.745485 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-02 20:08:45.745492 | orchestrator | Monday 02 June 2025 20:01:22 +0000 (0:00:00.852) 0:03:10.809 ***********
2025-06-02 20:08:45.745499 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.745505 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.745511 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.745517 | orchestrator |
2025-06-02 20:08:45.745522 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-02 20:08:45.745537 | orchestrator | Monday 02 June 2025 20:01:22 +0000 (0:00:00.268) 0:03:11.077 ***********
2025-06-02 20:08:45.745582 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.745589 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.745596 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.745602 | orchestrator |
2025-06-02 20:08:45.745609 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-02 20:08:45.745615 | orchestrator | Monday 02 June 2025 20:01:23 +0000 (0:00:01.274) 0:03:12.351 ***********
2025-06-02 20:08:45.745621 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.745628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:08:45.745634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:08:45.745641 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.745647 | orchestrator |
2025-06-02 20:08:45.745653 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-02 20:08:45.745660 | orchestrator | Monday 02 June 2025 20:01:24 +0000 (0:00:00.577) 0:03:12.929 ***********
2025-06-02 20:08:45.745667 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.745673 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.745679 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.745686 | orchestrator |
2025-06-02 20:08:45.745692 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-02 20:08:45.745699 | orchestrator | Monday 02 June 2025 20:01:24 +0000 (0:00:00.346) 0:03:13.275 ***********
2025-06-02 20:08:45.745705 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.745712 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.745718 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.745725 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.745731 | orchestrator |
2025-06-02 20:08:45.745738 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-02 20:08:45.745745 | orchestrator | Monday 02 June 2025 20:01:25 +0000 (0:00:00.856) 0:03:14.132 ***********
2025-06-02 20:08:45.745793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:08:45.745799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:08:45.745806 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:08:45.745812 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.745829 | orchestrator |
2025-06-02 20:08:45.745835 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-02 20:08:45.745841 | orchestrator | Monday 02 June 2025 20:01:26 +0000 (0:00:00.384) 0:03:14.516 ***********
2025-06-02 20:08:45.745848 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.745854 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.745860 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.745866 | orchestrator |
2025-06-02 20:08:45.745873 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-02 20:08:45.745877 | orchestrator | Monday 02 June 2025 20:01:26 +0000 (0:00:00.321) 0:03:14.838 ***********
2025-06-02 20:08:45.745881 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.745885 | orchestrator |
2025-06-02 20:08:45.745889 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-02 20:08:45.745893 | orchestrator | Monday 02 June 2025 20:01:26 +0000 (0:00:00.184) 0:03:15.023 ***********
2025-06-02 20:08:45.745896 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.745900 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.745904 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.745908 | orchestrator |
2025-06-02 20:08:45.745911 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-02 20:08:45.745915 | orchestrator | Monday 02 June 2025 20:01:26 +0000 (0:00:00.249) 0:03:15.273 ***********
2025-06-02 20:08:45.745919 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.745923 | orchestrator |
2025-06-02 20:08:45.745935 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-02 20:08:45.745942 | orchestrator | Monday 02 June 2025 20:01:27 +0000 (0:00:00.249) 0:03:15.522 ***********
2025-06-02 20:08:45.745948 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.745954 | orchestrator |
2025-06-02 20:08:45.745960 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-02 20:08:45.745967 | orchestrator | Monday 02 June 2025 20:01:27 +0000 (0:00:00.229) 0:03:15.752 ***********
2025-06-02 20:08:45.745973 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.745979 | orchestrator |
2025-06-02 20:08:45.745986 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-02 20:08:45.745993 | orchestrator | Monday 02 June 2025 20:01:27 +0000 (0:00:00.378) 0:03:16.130 ***********
2025-06-02 20:08:45.745999 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746006 | orchestrator |
2025-06-02 20:08:45.746047 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-06-02 20:08:45.746055 | orchestrator | Monday 02 June 2025 20:01:28 +0000 (0:00:00.246) 0:03:16.376 ***********
2025-06-02 20:08:45.746062 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746069 | orchestrator |
2025-06-02 20:08:45.746075 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-06-02 20:08:45.746082 | orchestrator | Monday 02 June 2025 20:01:28 +0000 (0:00:00.230) 0:03:16.606 ***********
2025-06-02 20:08:45.746088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:08:45.746092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:08:45.746096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:08:45.746100 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746104 | orchestrator |
2025-06-02 20:08:45.746108 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-06-02 20:08:45.746112 | orchestrator | Monday 02 June 2025 20:01:28 +0000 (0:00:00.432) 0:03:17.038 ***********
2025-06-02 20:08:45.746116 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746120 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.746123 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.746127 | orchestrator |
2025-06-02 20:08:45.746141 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-06-02 20:08:45.746148 | orchestrator | Monday 02 June 2025 20:01:28 +0000 (0:00:00.314) 0:03:17.352 ***********
2025-06-02 20:08:45.746159 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746165 | orchestrator |
2025-06-02 20:08:45.746171 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-06-02 20:08:45.746177 | orchestrator | Monday 02 June 2025 20:01:29 +0000 (0:00:00.197) 0:03:17.550 ***********
2025-06-02 20:08:45.746184 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746189 | orchestrator |
2025-06-02 20:08:45.746195 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-02 20:08:45.746201 | orchestrator | Monday 02 June 2025 20:01:29 +0000 (0:00:00.247) 0:03:17.797 ***********
2025-06-02 20:08:45.746206 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.746212 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.746218 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.746224 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.746230 | orchestrator |
2025-06-02 20:08:45.746236 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-02 20:08:45.746241 | orchestrator | Monday 02 June 2025 20:01:30 +0000 (0:00:01.041) 0:03:18.839 ***********
2025-06-02 20:08:45.746247 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.746252 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.746257 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.746262 | orchestrator |
2025-06-02 20:08:45.746269 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-02 20:08:45.746275 | orchestrator | Monday 02 June 2025 20:01:30 +0000 (0:00:00.341) 0:03:19.180 ***********
2025-06-02 20:08:45.746281 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:08:45.746288 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:08:45.746294 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:08:45.746300 | orchestrator |
2025-06-02 20:08:45.746305 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-02 20:08:45.746311 | orchestrator | Monday 02 June 2025 20:01:32 +0000 (0:00:01.295) 0:03:20.475 ***********
2025-06-02 20:08:45.746316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:08:45.746322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:08:45.746327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:08:45.746333 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746339 | orchestrator |
2025-06-02 20:08:45.746345 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-02 20:08:45.746351 | orchestrator | Monday 02 June 2025 20:01:33 +0000 (0:00:01.244) 0:03:21.720 ***********
2025-06-02 20:08:45.746357 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.746362 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.746367 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.746373 | orchestrator |
2025-06-02 20:08:45.746380 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-02 20:08:45.746385 | orchestrator | Monday 02 June 2025 20:01:33 +0000 (0:00:00.477) 0:03:22.197 ***********
2025-06-02 20:08:45.746391 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.746397 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.746403 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.746408 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.746413 | orchestrator |
2025-06-02 20:08:45.746474 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-02 20:08:45.746480 | orchestrator | Monday 02 June 2025 20:01:34 +0000 (0:00:01.035) 0:03:23.233 ***********
2025-06-02 20:08:45.746485 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.746492 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.746498 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.746504 | orchestrator |
2025-06-02 20:08:45.746518 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-02 20:08:45.746525 | orchestrator | Monday 02 June 2025 20:01:35 +0000 (0:00:00.344) 0:03:23.577 ***********
2025-06-02 20:08:45.746537 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:08:45.746543 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:08:45.746549 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:08:45.746555 | orchestrator |
2025-06-02 20:08:45.746561 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-02 20:08:45.746567 | orchestrator | Monday 02 June 2025 20:01:36 +0000 (0:00:01.315) 0:03:24.893 ***********
2025-06-02 20:08:45.746574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:08:45.746580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:08:45.746586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:08:45.746592 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746598 | orchestrator |
2025-06-02 20:08:45.746604 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-02 20:08:45.746610 | orchestrator | Monday 02 June 2025 20:01:37 +0000 (0:00:00.827) 0:03:25.720 ***********
2025-06-02 20:08:45.746616 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.746622 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.746629 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.746635 | orchestrator |
2025-06-02 20:08:45.746641 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-06-02 20:08:45.746647 | orchestrator | Monday 02 June 2025 20:01:37 +0000 (0:00:00.505) 0:03:26.225 ***********
2025-06-02 20:08:45.746654 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.746660 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.746666 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.746672 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746677 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.746684 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.746690 | orchestrator |
2025-06-02 20:08:45.746696 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-02 20:08:45.746701 | orchestrator | Monday 02 June 2025 20:01:38 +0000 (0:00:00.931) 0:03:27.156 ***********
2025-06-02 20:08:45.746730 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.746736 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.746742 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.746763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:08:45.746769 | orchestrator |
2025-06-02 20:08:45.746775 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-02 20:08:45.746781 | orchestrator | Monday 02 June 2025 20:01:39 +0000 (0:00:01.031) 0:03:28.188 ***********
2025-06-02 20:08:45.746787 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.746793 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.746799 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.746805 | orchestrator |
2025-06-02 20:08:45.746811 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-02 20:08:45.746817 | orchestrator | Monday 02 June 2025 20:01:40 +0000 (0:00:00.349) 0:03:28.538 ***********
2025-06-02 20:08:45.746823 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.746830 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.746835 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.746841 | orchestrator |
2025-06-02 20:08:45.746848 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-02 20:08:45.746853 | orchestrator | Monday 02 June 2025 20:01:41 +0000 (0:00:01.293) 0:03:29.831 ***********
2025-06-02 20:08:45.746859 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.746865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:08:45.746872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:08:45.746878 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.746884 | orchestrator |
2025-06-02 20:08:45.746897 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-02 20:08:45.746903 | orchestrator | Monday 02 June 2025 20:01:42 +0000 (0:00:00.826) 0:03:30.658 ***********
2025-06-02 20:08:45.746910 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.746916 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.746922 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.746928 | orchestrator |
2025-06-02 20:08:45.746934 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-06-02 20:08:45.746940 | orchestrator |
2025-06-02 20:08:45.746946 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 20:08:45.746952 | orchestrator | Monday 02 June 2025 20:01:43 +0000 (0:00:00.786) 0:03:31.444 ***********
2025-06-02 20:08:45.746959 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:08:45.746966 | orchestrator |
2025-06-02 20:08:45.746973 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 20:08:45.746980 | orchestrator | Monday 02 June 2025 20:01:43 +0000 (0:00:00.540) 0:03:31.985 ***********
2025-06-02 20:08:45.746986 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:08:45.746993 | orchestrator |
2025-06-02 20:08:45.746999 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 20:08:45.747006 | orchestrator | Monday 02 June 2025 20:01:44 +0000 (0:00:00.717) 0:03:32.703 ***********
2025-06-02 20:08:45.747012 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.747018 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.747024 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.747031 | orchestrator |
2025-06-02 20:08:45.747037 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 20:08:45.747044 | orchestrator | Monday 02 June 2025 20:01:45 +0000 (0:00:00.746) 0:03:33.449 ***********
2025-06-02 20:08:45.747049 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.747056 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.747063 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.747069 | orchestrator |
2025-06-02 20:08:45.747082 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 20:08:45.747089 | orchestrator | Monday 02 June 2025 20:01:45 +0000 (0:00:00.321) 0:03:33.771 ***********
2025-06-02 20:08:45.747095 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.747101 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.747108 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.747115 | orchestrator |
2025-06-02 20:08:45.747121 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 20:08:45.747127 | orchestrator | Monday 02 June 2025 20:01:45 +0000 (0:00:00.307) 0:03:34.079 ***********
2025-06-02 20:08:45.747134 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.747140 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.747146 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.747152 | orchestrator |
2025-06-02 20:08:45.747158 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 20:08:45.747164 | orchestrator | Monday 02 June 2025 20:01:46 +0000 (0:00:00.584) 0:03:34.663 ***********
2025-06-02 20:08:45.747171 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.747177 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.747183 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.747189 | orchestrator |
2025-06-02 20:08:45.747195 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 20:08:45.747202 | orchestrator | Monday 02 June 2025 20:01:47 +0000 (0:00:00.752) 0:03:35.416 ***********
2025-06-02 20:08:45.747208 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.747215 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.747221 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.747227 | orchestrator |
2025-06-02 20:08:45.747234 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 20:08:45.747246 | orchestrator | Monday 02 June 2025 20:01:47 +0000 (0:00:00.317) 0:03:35.733 ***********
2025-06-02 20:08:45.747253 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.747259 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.747265 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.747271 | orchestrator |
2025-06-02 20:08:45.747277 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 20:08:45.747294 | orchestrator | Monday 02 June 2025 20:01:47 +0000 (0:00:00.269) 0:03:36.003 ***********
2025-06-02 20:08:45.747301 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.747308 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.747315 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.747321 | orchestrator |
2025-06-02 20:08:45.747327 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 20:08:45.747333 | orchestrator | Monday 02 June 2025 20:01:48 +0000 (0:00:00.989) 0:03:36.993 ***********
2025-06-02 20:08:45.747340 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.747347 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.747353 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.747359 | orchestrator |
2025-06-02 20:08:45.747365 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 20:08:45.747371 | orchestrator | Monday 02 June 2025 20:01:49 +0000 (0:00:00.772) 0:03:37.765 ***********
2025-06-02 20:08:45.747377 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.747384 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.747389 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.747395 | orchestrator |
2025-06-02 20:08:45.747401 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 20:08:45.747406 | orchestrator | Monday 02 June 2025 20:01:49 +0000 (0:00:00.205) 0:03:37.971 ***********
2025-06-02 20:08:45.747413 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.747418 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.747424 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.747429 | orchestrator |
2025-06-02 20:08:45.747436 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 20:08:45.747441 | orchestrator | Monday 02 June 2025 20:01:49 +0000 (0:00:00.239) 0:03:38.210 ***********
2025-06-02 20:08:45.747447 |
orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.747453 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.747459 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.747465 | orchestrator | 2025-06-02 20:08:45.747473 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 20:08:45.747479 | orchestrator | Monday 02 June 2025 20:01:50 +0000 (0:00:00.388) 0:03:38.599 *********** 2025-06-02 20:08:45.747485 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.747492 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.747498 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.747504 | orchestrator | 2025-06-02 20:08:45.747509 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 20:08:45.747515 | orchestrator | Monday 02 June 2025 20:01:50 +0000 (0:00:00.286) 0:03:38.885 *********** 2025-06-02 20:08:45.747522 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.747529 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.747535 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.747541 | orchestrator | 2025-06-02 20:08:45.747548 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 20:08:45.747554 | orchestrator | Monday 02 June 2025 20:01:50 +0000 (0:00:00.246) 0:03:39.132 *********** 2025-06-02 20:08:45.747560 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.747567 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.747573 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.747579 | orchestrator | 2025-06-02 20:08:45.747586 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 20:08:45.747592 | orchestrator | Monday 02 June 2025 20:01:51 +0000 (0:00:00.261) 0:03:39.393 *********** 2025-06-02 20:08:45.747610 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.747616 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.747622 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.747629 | orchestrator | 2025-06-02 20:08:45.747635 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 20:08:45.747642 | orchestrator | Monday 02 June 2025 20:01:51 +0000 (0:00:00.401) 0:03:39.795 *********** 2025-06-02 20:08:45.747647 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.747653 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.747659 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.747666 | orchestrator | 2025-06-02 20:08:45.747677 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 20:08:45.747685 | orchestrator | Monday 02 June 2025 20:01:51 +0000 (0:00:00.257) 0:03:40.053 *********** 2025-06-02 20:08:45.747691 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.747698 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.747705 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.747711 | orchestrator | 2025-06-02 20:08:45.747717 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 20:08:45.747723 | orchestrator | Monday 02 June 2025 20:01:51 +0000 (0:00:00.284) 0:03:40.337 *********** 2025-06-02 20:08:45.747730 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.747736 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.747742 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.747796 | orchestrator | 2025-06-02 20:08:45.747805 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-02 20:08:45.747812 | orchestrator | Monday 02 June 2025 20:01:52 +0000 (0:00:00.600) 0:03:40.938 *********** 2025-06-02 20:08:45.747818 | orchestrator | ok: [testbed-node-0] 2025-06-02 
20:08:45.747824 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.747830 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.747837 | orchestrator | 2025-06-02 20:08:45.747843 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-02 20:08:45.747849 | orchestrator | Monday 02 June 2025 20:01:52 +0000 (0:00:00.293) 0:03:41.231 *********** 2025-06-02 20:08:45.747856 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:08:45.747863 | orchestrator | 2025-06-02 20:08:45.747869 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-02 20:08:45.747875 | orchestrator | Monday 02 June 2025 20:01:53 +0000 (0:00:00.586) 0:03:41.817 *********** 2025-06-02 20:08:45.747881 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.747888 | orchestrator | 2025-06-02 20:08:45.747894 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-02 20:08:45.747900 | orchestrator | Monday 02 June 2025 20:01:53 +0000 (0:00:00.110) 0:03:41.928 *********** 2025-06-02 20:08:45.747906 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 20:08:45.747913 | orchestrator | 2025-06-02 20:08:45.747926 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-02 20:08:45.747932 | orchestrator | Monday 02 June 2025 20:01:55 +0000 (0:00:01.442) 0:03:43.371 *********** 2025-06-02 20:08:45.747937 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.747943 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.747950 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.747956 | orchestrator | 2025-06-02 20:08:45.747962 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-02 20:08:45.747967 | orchestrator | Monday 02 June 2025 
20:01:55 +0000 (0:00:00.374) 0:03:43.745 *********** 2025-06-02 20:08:45.747973 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.747979 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.747986 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.747992 | orchestrator | 2025-06-02 20:08:45.747998 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-02 20:08:45.748004 | orchestrator | Monday 02 June 2025 20:01:55 +0000 (0:00:00.399) 0:03:44.145 *********** 2025-06-02 20:08:45.748017 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748024 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748029 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748035 | orchestrator | 2025-06-02 20:08:45.748041 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-02 20:08:45.748046 | orchestrator | Monday 02 June 2025 20:01:56 +0000 (0:00:01.113) 0:03:45.259 *********** 2025-06-02 20:08:45.748051 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748057 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748063 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748068 | orchestrator | 2025-06-02 20:08:45.748074 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-02 20:08:45.748080 | orchestrator | Monday 02 June 2025 20:01:57 +0000 (0:00:00.968) 0:03:46.227 *********** 2025-06-02 20:08:45.748087 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748092 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748097 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748103 | orchestrator | 2025-06-02 20:08:45.748108 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-02 20:08:45.748113 | orchestrator | Monday 02 June 2025 20:01:58 +0000 (0:00:00.619) 
0:03:46.847 *********** 2025-06-02 20:08:45.748119 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.748124 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.748131 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.748136 | orchestrator | 2025-06-02 20:08:45.748142 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-02 20:08:45.748148 | orchestrator | Monday 02 June 2025 20:01:59 +0000 (0:00:00.627) 0:03:47.474 *********** 2025-06-02 20:08:45.748154 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748161 | orchestrator | 2025-06-02 20:08:45.748167 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-02 20:08:45.748173 | orchestrator | Monday 02 June 2025 20:02:00 +0000 (0:00:01.175) 0:03:48.650 *********** 2025-06-02 20:08:45.748178 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.748184 | orchestrator | 2025-06-02 20:08:45.748190 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-02 20:08:45.748196 | orchestrator | Monday 02 June 2025 20:02:00 +0000 (0:00:00.691) 0:03:49.341 *********** 2025-06-02 20:08:45.748201 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 20:08:45.748207 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.748212 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.748218 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 20:08:45.748224 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-02 20:08:45.748230 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 20:08:45.748242 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 20:08:45.748248 | orchestrator | 
changed: [testbed-node-0 -> {{ item }}] 2025-06-02 20:08:45.748254 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 20:08:45.748260 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-06-02 20:08:45.748266 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-02 20:08:45.748272 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-02 20:08:45.748279 | orchestrator | 2025-06-02 20:08:45.748285 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-02 20:08:45.748291 | orchestrator | Monday 02 June 2025 20:02:04 +0000 (0:00:03.352) 0:03:52.694 *********** 2025-06-02 20:08:45.748297 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748302 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748307 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748313 | orchestrator | 2025-06-02 20:08:45.748325 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-02 20:08:45.748331 | orchestrator | Monday 02 June 2025 20:02:05 +0000 (0:00:01.638) 0:03:54.333 *********** 2025-06-02 20:08:45.748337 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.748342 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.748348 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.748354 | orchestrator | 2025-06-02 20:08:45.748359 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-02 20:08:45.748365 | orchestrator | Monday 02 June 2025 20:02:06 +0000 (0:00:00.316) 0:03:54.649 *********** 2025-06-02 20:08:45.748371 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.748377 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.748382 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.748388 | orchestrator | 2025-06-02 20:08:45.748394 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2025-06-02 20:08:45.748400 | orchestrator | Monday 02 June 2025 20:02:06 +0000 (0:00:00.322) 0:03:54.971 *********** 2025-06-02 20:08:45.748406 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748413 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748419 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748425 | orchestrator | 2025-06-02 20:08:45.748431 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-02 20:08:45.748446 | orchestrator | Monday 02 June 2025 20:02:08 +0000 (0:00:01.713) 0:03:56.685 *********** 2025-06-02 20:08:45.748453 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748459 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748465 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748471 | orchestrator | 2025-06-02 20:08:45.748477 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-02 20:08:45.748484 | orchestrator | Monday 02 June 2025 20:02:09 +0000 (0:00:01.542) 0:03:58.228 *********** 2025-06-02 20:08:45.748490 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.748496 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.748503 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.748509 | orchestrator | 2025-06-02 20:08:45.748515 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-02 20:08:45.748520 | orchestrator | Monday 02 June 2025 20:02:10 +0000 (0:00:00.291) 0:03:58.520 *********** 2025-06-02 20:08:45.748526 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:08:45.748532 | orchestrator | 2025-06-02 20:08:45.748538 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-02 20:08:45.748544 | 
orchestrator | Monday 02 June 2025 20:02:10 +0000 (0:00:00.533) 0:03:59.054 *********** 2025-06-02 20:08:45.748550 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.748557 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.748563 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.748569 | orchestrator | 2025-06-02 20:08:45.748576 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-02 20:08:45.748582 | orchestrator | Monday 02 June 2025 20:02:11 +0000 (0:00:00.555) 0:03:59.610 *********** 2025-06-02 20:08:45.748587 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.748594 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.748600 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.748606 | orchestrator | 2025-06-02 20:08:45.748613 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-02 20:08:45.748619 | orchestrator | Monday 02 June 2025 20:02:11 +0000 (0:00:00.302) 0:03:59.913 *********** 2025-06-02 20:08:45.748626 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:08:45.748633 | orchestrator | 2025-06-02 20:08:45.748639 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-02 20:08:45.748646 | orchestrator | Monday 02 June 2025 20:02:12 +0000 (0:00:00.501) 0:04:00.415 *********** 2025-06-02 20:08:45.748658 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748665 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748672 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748678 | orchestrator | 2025-06-02 20:08:45.748685 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-02 20:08:45.748691 | orchestrator | Monday 02 June 2025 20:02:13 +0000 (0:00:01.883) 
0:04:02.298 *********** 2025-06-02 20:08:45.748698 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748704 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748710 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748715 | orchestrator | 2025-06-02 20:08:45.748722 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-02 20:08:45.748728 | orchestrator | Monday 02 June 2025 20:02:15 +0000 (0:00:01.227) 0:04:03.525 *********** 2025-06-02 20:08:45.748734 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748740 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748764 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748771 | orchestrator | 2025-06-02 20:08:45.748776 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-02 20:08:45.748782 | orchestrator | Monday 02 June 2025 20:02:16 +0000 (0:00:01.743) 0:04:05.269 *********** 2025-06-02 20:08:45.748788 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.748793 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.748804 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.748810 | orchestrator | 2025-06-02 20:08:45.748817 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-02 20:08:45.748822 | orchestrator | Monday 02 June 2025 20:02:18 +0000 (0:00:01.900) 0:04:07.169 *********** 2025-06-02 20:08:45.748828 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:08:45.748834 | orchestrator | 2025-06-02 20:08:45.748841 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-06-02 20:08:45.748848 | orchestrator | Monday 02 June 2025 20:02:19 +0000 (0:00:00.824) 0:04:07.994 *********** 2025-06-02 20:08:45.748854 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.748860 | orchestrator | 2025-06-02 20:08:45.748866 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-02 20:08:45.748873 | orchestrator | Monday 02 June 2025 20:02:21 +0000 (0:00:01.382) 0:04:09.376 *********** 2025-06-02 20:08:45.748879 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.748885 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.748891 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.748897 | orchestrator | 2025-06-02 20:08:45.748903 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-02 20:08:45.748909 | orchestrator | Monday 02 June 2025 20:02:30 +0000 (0:00:09.455) 0:04:18.832 *********** 2025-06-02 20:08:45.748916 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.748922 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.748928 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.748935 | orchestrator | 2025-06-02 20:08:45.748941 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-02 20:08:45.748948 | orchestrator | Monday 02 June 2025 20:02:30 +0000 (0:00:00.385) 0:04:19.217 *********** 2025-06-02 20:08:45.748963 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00461e918da7c75746d4144f1ca08f7baec9cee4'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-06-02 20:08:45.748972 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00461e918da7c75746d4144f1ca08f7baec9cee4'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-02 20:08:45.748987 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00461e918da7c75746d4144f1ca08f7baec9cee4'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-02 20:08:45.748995 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00461e918da7c75746d4144f1ca08f7baec9cee4'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-02 20:08:45.749002 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00461e918da7c75746d4144f1ca08f7baec9cee4'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-02 20:08:45.749010 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00461e918da7c75746d4144f1ca08f7baec9cee4'}}, {'key': 'osd_crush_chooseleaf_type', 'value': 
'__omit_place_holder__00461e918da7c75746d4144f1ca08f7baec9cee4'}])  2025-06-02 20:08:45.749018 | orchestrator | 2025-06-02 20:08:45.749024 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 20:08:45.749031 | orchestrator | Monday 02 June 2025 20:02:46 +0000 (0:00:15.838) 0:04:35.056 *********** 2025-06-02 20:08:45.749037 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.749043 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.749050 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.749056 | orchestrator | 2025-06-02 20:08:45.749062 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-02 20:08:45.749068 | orchestrator | Monday 02 June 2025 20:02:47 +0000 (0:00:00.386) 0:04:35.442 *********** 2025-06-02 20:08:45.749074 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:08:45.749081 | orchestrator | 2025-06-02 20:08:45.749091 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-02 20:08:45.749098 | orchestrator | Monday 02 June 2025 20:02:47 +0000 (0:00:00.894) 0:04:36.336 *********** 2025-06-02 20:08:45.749104 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.749110 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.749117 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.749123 | orchestrator | 2025-06-02 20:08:45.749130 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-02 20:08:45.749136 | orchestrator | Monday 02 June 2025 20:02:48 +0000 (0:00:00.350) 0:04:36.687 *********** 2025-06-02 20:08:45.749142 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.749148 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.749154 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
20:08:45.749161 | orchestrator | 2025-06-02 20:08:45.749167 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-02 20:08:45.749173 | orchestrator | Monday 02 June 2025 20:02:48 +0000 (0:00:00.395) 0:04:37.083 *********** 2025-06-02 20:08:45.749179 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 20:08:45.749186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 20:08:45.749192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 20:08:45.749203 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.749209 | orchestrator | 2025-06-02 20:08:45.749216 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-02 20:08:45.749222 | orchestrator | Monday 02 June 2025 20:02:49 +0000 (0:00:00.939) 0:04:38.023 *********** 2025-06-02 20:08:45.749228 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.749234 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.749240 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.749247 | orchestrator | 2025-06-02 20:08:45.749255 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-02 20:08:45.749260 | orchestrator | 2025-06-02 20:08:45.749265 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 20:08:45.749272 | orchestrator | Monday 02 June 2025 20:02:50 +0000 (0:00:00.725) 0:04:38.749 *********** 2025-06-02 20:08:45.749282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:08:45.749289 | orchestrator | 2025-06-02 20:08:45.749295 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 20:08:45.749302 | orchestrator | Monday 02 June 2025 20:02:50 +0000 (0:00:00.465) 
0:04:39.214 ***********
2025-06-02 20:08:45.749308 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:08:45.749314 | orchestrator |
2025-06-02 20:08:45.749320 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 20:08:45.749327 | orchestrator | Monday 02 June 2025 20:02:51 +0000 (0:00:00.607) 0:04:39.822 ***********
2025-06-02 20:08:45.749333 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.749339 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.749345 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.749352 | orchestrator |
2025-06-02 20:08:45.749358 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 20:08:45.749364 | orchestrator | Monday 02 June 2025 20:02:52 +0000 (0:00:00.707) 0:04:40.530 ***********
2025-06-02 20:08:45.749371 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749377 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749383 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749389 | orchestrator |
2025-06-02 20:08:45.749394 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 20:08:45.749401 | orchestrator | Monday 02 June 2025 20:02:52 +0000 (0:00:00.282) 0:04:40.812 ***********
2025-06-02 20:08:45.749407 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749414 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749420 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749426 | orchestrator |
2025-06-02 20:08:45.749432 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 20:08:45.749438 | orchestrator | Monday 02 June 2025 20:02:52 +0000 (0:00:00.461) 0:04:41.273 ***********
2025-06-02 20:08:45.749445 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749451 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749457 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749463 | orchestrator |
2025-06-02 20:08:45.749470 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 20:08:45.749476 | orchestrator | Monday 02 June 2025 20:02:53 +0000 (0:00:00.304) 0:04:41.577 ***********
2025-06-02 20:08:45.749482 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.749488 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.749495 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.749501 | orchestrator |
2025-06-02 20:08:45.749507 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 20:08:45.749513 | orchestrator | Monday 02 June 2025 20:02:53 +0000 (0:00:00.704) 0:04:42.281 ***********
2025-06-02 20:08:45.749519 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749525 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749530 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749541 | orchestrator |
2025-06-02 20:08:45.749547 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 20:08:45.749553 | orchestrator | Monday 02 June 2025 20:02:54 +0000 (0:00:00.250) 0:04:42.532 ***********
2025-06-02 20:08:45.749558 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749564 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749570 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749575 | orchestrator |
2025-06-02 20:08:45.749581 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 20:08:45.749586 | orchestrator | Monday 02 June 2025 20:02:54 +0000 (0:00:00.491) 0:04:43.023 ***********
2025-06-02 20:08:45.749592 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.749598 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.749604 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.749611 | orchestrator |
2025-06-02 20:08:45.749621 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 20:08:45.749628 | orchestrator | Monday 02 June 2025 20:02:55 +0000 (0:00:00.769) 0:04:43.793 ***********
2025-06-02 20:08:45.749634 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.749640 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.749646 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.749653 | orchestrator |
2025-06-02 20:08:45.749659 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 20:08:45.749665 | orchestrator | Monday 02 June 2025 20:02:56 +0000 (0:00:00.696) 0:04:44.489 ***********
2025-06-02 20:08:45.749671 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749677 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749683 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749690 | orchestrator |
2025-06-02 20:08:45.749696 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 20:08:45.749702 | orchestrator | Monday 02 June 2025 20:02:56 +0000 (0:00:00.263) 0:04:44.753 ***********
2025-06-02 20:08:45.749708 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.749715 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.749721 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.749727 | orchestrator |
2025-06-02 20:08:45.749733 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 20:08:45.749739 | orchestrator | Monday 02 June 2025 20:02:56 +0000 (0:00:00.478) 0:04:45.232 ***********
2025-06-02 20:08:45.749745 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749766 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749772 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749779 | orchestrator |
2025-06-02 20:08:45.749785 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 20:08:45.749791 | orchestrator | Monday 02 June 2025 20:02:57 +0000 (0:00:00.260) 0:04:45.492 ***********
2025-06-02 20:08:45.749797 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749803 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749809 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749815 | orchestrator |
2025-06-02 20:08:45.749822 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 20:08:45.749828 | orchestrator | Monday 02 June 2025 20:02:57 +0000 (0:00:00.271) 0:04:45.763 ***********
2025-06-02 20:08:45.749839 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749846 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749852 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749858 | orchestrator |
2025-06-02 20:08:45.749864 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 20:08:45.749871 | orchestrator | Monday 02 June 2025 20:02:57 +0000 (0:00:00.253) 0:04:46.017 ***********
2025-06-02 20:08:45.749877 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749883 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749889 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749895 | orchestrator |
2025-06-02 20:08:45.749909 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 20:08:45.749915 | orchestrator | Monday 02 June 2025 20:02:58 +0000 (0:00:00.453) 0:04:46.471 ***********
2025-06-02 20:08:45.749922 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.749928 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.749934 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.749940 | orchestrator |
2025-06-02 20:08:45.749946 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 20:08:45.749953 | orchestrator | Monday 02 June 2025 20:02:58 +0000 (0:00:00.250) 0:04:46.721 ***********
2025-06-02 20:08:45.749959 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.749965 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.749971 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.749978 | orchestrator |
2025-06-02 20:08:45.749984 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 20:08:45.749990 | orchestrator | Monday 02 June 2025 20:02:58 +0000 (0:00:00.295) 0:04:47.017 ***********
2025-06-02 20:08:45.749996 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.750002 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.750009 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.750135 | orchestrator |
2025-06-02 20:08:45.750141 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 20:08:45.750145 | orchestrator | Monday 02 June 2025 20:02:59 +0000 (0:00:00.398) 0:04:47.415 ***********
2025-06-02 20:08:45.750149 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.750153 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.750157 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.750161 | orchestrator |
2025-06-02 20:08:45.750165 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-06-02 20:08:45.750169 | orchestrator | Monday 02 June 2025 20:02:59 +0000 (0:00:00.844) 0:04:48.260 ***********
2025-06-02 20:08:45.750173 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.750178 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:08:45.750182 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:08:45.750186 | orchestrator |
2025-06-02 20:08:45.750190 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-06-02 20:08:45.750194 | orchestrator | Monday 02 June 2025 20:03:00 +0000 (0:00:00.609) 0:04:48.870 ***********
2025-06-02 20:08:45.750198 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:08:45.750203 | orchestrator |
2025-06-02 20:08:45.750207 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-06-02 20:08:45.750210 | orchestrator | Monday 02 June 2025 20:03:01 +0000 (0:00:00.520) 0:04:49.390 ***********
2025-06-02 20:08:45.750214 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.750218 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.750222 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.750226 | orchestrator |
2025-06-02 20:08:45.750230 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-06-02 20:08:45.750234 | orchestrator | Monday 02 June 2025 20:03:01 +0000 (0:00:00.940) 0:04:50.330 ***********
2025-06-02 20:08:45.750238 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.750242 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.750250 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.750254 | orchestrator |
2025-06-02 20:08:45.750258 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-06-02 20:08:45.750263 | orchestrator | Monday 02 June 2025 20:03:02 +0000 (0:00:00.323) 0:04:50.654 ***********
2025-06-02 20:08:45.750269 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:08:45.750276 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:08:45.750282 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:08:45.750288 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-06-02 20:08:45.750299 | orchestrator |
2025-06-02 20:08:45.750307 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-06-02 20:08:45.750311 | orchestrator | Monday 02 June 2025 20:03:13 +0000 (0:00:10.812) 0:05:01.466 ***********
2025-06-02 20:08:45.750314 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.750318 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.750322 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.750326 | orchestrator |
2025-06-02 20:08:45.750330 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-06-02 20:08:45.750333 | orchestrator | Monday 02 June 2025 20:03:13 +0000 (0:00:00.533) 0:05:02.000 ***********
2025-06-02 20:08:45.750337 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-02 20:08:45.750341 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-02 20:08:45.750345 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-02 20:08:45.750349 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-02 20:08:45.750352 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:08:45.750356 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:08:45.750360 | orchestrator |
2025-06-02 20:08:45.750364 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-06-02 20:08:45.750367 | orchestrator | Monday 02 June 2025 20:03:16 +0000 (0:00:03.118) 0:05:05.119 ***********
2025-06-02 20:08:45.750371 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-02 20:08:45.750375 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-02 20:08:45.750399 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-02 20:08:45.750404 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:08:45.750408 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-02 20:08:45.750412 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-02 20:08:45.750415 | orchestrator |
2025-06-02 20:08:45.750419 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-06-02 20:08:45.750423 | orchestrator | Monday 02 June 2025 20:03:18 +0000 (0:00:01.449) 0:05:06.568 ***********
2025-06-02 20:08:45.750426 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.750430 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.750434 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.750438 | orchestrator |
2025-06-02 20:08:45.750442 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-06-02 20:08:45.750445 | orchestrator | Monday 02 June 2025 20:03:18 +0000 (0:00:00.751) 0:05:07.320 ***********
2025-06-02 20:08:45.750449 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.750453 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.750457 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.750460 | orchestrator |
2025-06-02 20:08:45.750464 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-06-02 20:08:45.750468 | orchestrator | Monday 02 June 2025 20:03:19 +0000 (0:00:00.322) 0:05:07.643 ***********
2025-06-02 20:08:45.750471 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.750475 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.750479 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.750483 | orchestrator |
2025-06-02 20:08:45.750486 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-06-02 20:08:45.750490 | orchestrator | Monday 02 June 2025 20:03:19 +0000 (0:00:00.248) 0:05:07.892 ***********
2025-06-02 20:08:45.750494 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:08:45.750498 | orchestrator |
2025-06-02 20:08:45.750501 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-06-02 20:08:45.750505 | orchestrator | Monday 02 June 2025 20:03:20 +0000 (0:00:00.632) 0:05:08.524 ***********
2025-06-02 20:08:45.750509 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.750520 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.750524 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.750527 | orchestrator |
2025-06-02 20:08:45.750531 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-06-02 20:08:45.750535 | orchestrator | Monday 02 June 2025 20:03:20 +0000 (0:00:00.289) 0:05:08.814 ***********
2025-06-02 20:08:45.750539 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.750542 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.750546 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:08:45.750550 | orchestrator |
2025-06-02 20:08:45.750553 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-06-02 20:08:45.750557 | orchestrator | Monday 02 June 2025 20:03:20 +0000 (0:00:00.385) 0:05:09.199 ***********
2025-06-02 20:08:45.750561 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:08:45.750565 | orchestrator |
2025-06-02 20:08:45.750569 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-06-02 20:08:45.750572 | orchestrator | Monday 02 June 2025 20:03:21 +0000 (0:00:00.646) 0:05:09.846 ***********
2025-06-02 20:08:45.750576 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.750580 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.750583 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.750587 | orchestrator |
2025-06-02 20:08:45.750591 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-06-02 20:08:45.750595 | orchestrator | Monday 02 June 2025 20:03:22 +0000 (0:00:01.217) 0:05:11.063 ***********
2025-06-02 20:08:45.750598 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.750605 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.750609 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.750613 | orchestrator |
2025-06-02 20:08:45.750617 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-06-02 20:08:45.750620 | orchestrator | Monday 02 June 2025 20:03:23 +0000 (0:00:01.064) 0:05:12.128 ***********
2025-06-02 20:08:45.750624 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.750628 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.750632 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.750635 | orchestrator |
2025-06-02 20:08:45.750639 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-06-02 20:08:45.750643 | orchestrator | Monday 02 June 2025 20:03:25 +0000 (0:00:02.142) 0:05:14.271 ***********
2025-06-02 20:08:45.750646 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.750650 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.750654 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.750657 | orchestrator |
2025-06-02 20:08:45.750661 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-06-02 20:08:45.750665 | orchestrator | Monday 02 June 2025 20:03:27 +0000 (0:00:01.994) 0:05:16.266 ***********
2025-06-02 20:08:45.750669 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.750672 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:08:45.750676 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-06-02 20:08:45.750680 | orchestrator |
2025-06-02 20:08:45.750684 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-06-02 20:08:45.750687 | orchestrator | Monday 02 June 2025 20:03:28 +0000 (0:00:00.388) 0:05:16.654 ***********
2025-06-02 20:08:45.750691 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-06-02 20:08:45.750695 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-06-02 20:08:45.750699 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-06-02 20:08:45.750715 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-06-02 20:08:45.750719 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2025-06-02 20:08:45.750726 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-02 20:08:45.750730 | orchestrator |
2025-06-02 20:08:45.750734 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-06-02 20:08:45.750738 | orchestrator | Monday 02 June 2025 20:03:58 +0000 (0:00:30.219) 0:05:46.873 ***********
2025-06-02 20:08:45.750741 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-02 20:08:45.750745 | orchestrator |
2025-06-02 20:08:45.750786 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-06-02 20:08:45.750790 | orchestrator | Monday 02 June 2025 20:04:00 +0000 (0:00:01.582) 0:05:48.456 ***********
2025-06-02 20:08:45.750794 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.750798 | orchestrator |
2025-06-02 20:08:45.750802 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-06-02 20:08:45.750805 | orchestrator | Monday 02 June 2025 20:04:01 +0000 (0:00:00.945) 0:05:49.402 ***********
2025-06-02 20:08:45.750809 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.750813 | orchestrator |
2025-06-02 20:08:45.750817 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-06-02 20:08:45.750820 | orchestrator | Monday 02 June 2025 20:04:01 +0000 (0:00:00.135) 0:05:49.538 ***********
2025-06-02 20:08:45.750824 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-06-02 20:08:45.750828 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-06-02 20:08:45.750832 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-06-02 20:08:45.750836 | orchestrator |
2025-06-02 20:08:45.750839 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-06-02 20:08:45.750843 | orchestrator | Monday 02 June 2025 20:04:07 +0000 (0:00:06.565) 0:05:56.104 ***********
2025-06-02 20:08:45.750847 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-06-02 20:08:45.750851 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-06-02 20:08:45.750855 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-06-02 20:08:45.750858 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-06-02 20:08:45.750862 | orchestrator |
2025-06-02 20:08:45.750866 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 20:08:45.750870 | orchestrator | Monday 02 June 2025 20:04:12 +0000 (0:00:04.680) 0:06:00.784 ***********
2025-06-02 20:08:45.750873 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.750877 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.750883 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.750889 | orchestrator |
2025-06-02 20:08:45.750896 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-02 20:08:45.750902 | orchestrator | Monday 02 June 2025 20:04:13 +0000 (0:00:00.925) 0:06:01.709 ***********
2025-06-02 20:08:45.750908 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:08:45.750914 | orchestrator |
2025-06-02 20:08:45.750920 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-02 20:08:45.750925 | orchestrator | Monday 02 June 2025 20:04:13 +0000 (0:00:00.522) 0:06:02.232 ***********
2025-06-02 20:08:45.750930 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.750936 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.750943 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.750949 | orchestrator |
2025-06-02 20:08:45.750961 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-02 20:08:45.750966 | orchestrator | Monday 02 June 2025 20:04:14 +0000 (0:00:00.327) 0:06:02.559 ***********
2025-06-02 20:08:45.750970 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:08:45.750974 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:08:45.750982 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:08:45.750986 | orchestrator |
2025-06-02 20:08:45.750990 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-02 20:08:45.750994 | orchestrator | Monday 02 June 2025 20:04:15 +0000 (0:00:01.432) 0:06:03.992 ***********
2025-06-02 20:08:45.750998 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:08:45.751002 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:08:45.751005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:08:45.751009 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:08:45.751013 | orchestrator |
2025-06-02 20:08:45.751017 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-02 20:08:45.751021 | orchestrator | Monday 02 June 2025 20:04:16 +0000 (0:00:00.663) 0:06:04.656 ***********
2025-06-02 20:08:45.751024 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:08:45.751028 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:08:45.751032 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:08:45.751036 | orchestrator |
2025-06-02 20:08:45.751040 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-06-02 20:08:45.751044 | orchestrator |
2025-06-02 20:08:45.751048 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 20:08:45.751051 | orchestrator | Monday 02 June 2025 20:04:16 +0000 (0:00:00.682) 0:06:05.338 ***********
2025-06-02 20:08:45.751055 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.751059 | orchestrator |
2025-06-02 20:08:45.751063 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 20:08:45.751067 | orchestrator | Monday 02 June 2025 20:04:17 +0000 (0:00:00.752) 0:06:06.091 ***********
2025-06-02 20:08:45.751090 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.751095 | orchestrator |
2025-06-02 20:08:45.751098 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 20:08:45.751102 | orchestrator | Monday 02 June 2025 20:04:18 +0000 (0:00:00.512) 0:06:06.603 ***********
2025-06-02 20:08:45.751106 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751110 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751114 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751117 | orchestrator |
2025-06-02 20:08:45.751121 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 20:08:45.751125 | orchestrator | Monday 02 June 2025 20:04:18 +0000 (0:00:00.284) 0:06:06.888 ***********
2025-06-02 20:08:45.751129 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751132 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751136 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751140 | orchestrator |
2025-06-02 20:08:45.751144 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 20:08:45.751148 | orchestrator | Monday 02 June 2025 20:04:19 +0000 (0:00:00.952) 0:06:07.840 ***********
2025-06-02 20:08:45.751151 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751155 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751159 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751163 | orchestrator |
2025-06-02 20:08:45.751167 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 20:08:45.751170 | orchestrator | Monday 02 June 2025 20:04:20 +0000 (0:00:00.706) 0:06:08.547 ***********
2025-06-02 20:08:45.751174 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751178 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751182 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751185 | orchestrator |
2025-06-02 20:08:45.751189 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 20:08:45.751193 | orchestrator | Monday 02 June 2025 20:04:20 +0000 (0:00:00.722) 0:06:09.269 ***********
2025-06-02 20:08:45.751197 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751204 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751208 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751212 | orchestrator |
2025-06-02 20:08:45.751215 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 20:08:45.751219 | orchestrator | Monday 02 June 2025 20:04:21 +0000 (0:00:00.290) 0:06:09.560 ***********
2025-06-02 20:08:45.751223 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751227 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751231 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751234 | orchestrator |
2025-06-02 20:08:45.751238 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 20:08:45.751242 | orchestrator | Monday 02 June 2025 20:04:21 +0000 (0:00:00.589) 0:06:10.150 ***********
2025-06-02 20:08:45.751246 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751249 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751253 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751257 | orchestrator |
2025-06-02 20:08:45.751261 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 20:08:45.751267 | orchestrator | Monday 02 June 2025 20:04:22 +0000 (0:00:00.320) 0:06:10.470 ***********
2025-06-02 20:08:45.751273 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751279 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751285 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751291 | orchestrator |
2025-06-02 20:08:45.751298 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 20:08:45.751304 | orchestrator | Monday 02 June 2025 20:04:22 +0000 (0:00:00.668) 0:06:11.138 ***********
2025-06-02 20:08:45.751310 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751317 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751322 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751325 | orchestrator |
2025-06-02 20:08:45.751329 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 20:08:45.751337 | orchestrator | Monday 02 June 2025 20:04:23 +0000 (0:00:00.768) 0:06:11.907 ***********
2025-06-02 20:08:45.751341 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751344 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751348 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751352 | orchestrator |
2025-06-02 20:08:45.751356 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 20:08:45.751360 | orchestrator | Monday 02 June 2025 20:04:24 +0000 (0:00:00.590) 0:06:12.498 ***********
2025-06-02 20:08:45.751364 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751367 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751371 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751375 | orchestrator |
2025-06-02 20:08:45.751378 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 20:08:45.751382 | orchestrator | Monday 02 June 2025 20:04:24 +0000 (0:00:00.291) 0:06:12.790 ***********
2025-06-02 20:08:45.751386 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751390 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751394 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751397 | orchestrator |
2025-06-02 20:08:45.751401 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 20:08:45.751405 | orchestrator | Monday 02 June 2025 20:04:24 +0000 (0:00:00.314) 0:06:13.104 ***********
2025-06-02 20:08:45.751409 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751413 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751416 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751420 | orchestrator |
2025-06-02 20:08:45.751424 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 20:08:45.751428 | orchestrator | Monday 02 June 2025 20:04:25 +0000 (0:00:00.342) 0:06:13.447 ***********
2025-06-02 20:08:45.751432 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751435 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751443 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751447 | orchestrator |
2025-06-02 20:08:45.751451 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 20:08:45.751455 | orchestrator | Monday 02 June 2025 20:04:25 +0000 (0:00:00.579) 0:06:14.026 ***********
2025-06-02 20:08:45.751459 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751463 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751466 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751470 | orchestrator |
2025-06-02 20:08:45.751477 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 20:08:45.751481 | orchestrator | Monday 02 June 2025 20:04:25 +0000 (0:00:00.304) 0:06:14.331 ***********
2025-06-02 20:08:45.751485 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751488 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751492 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751496 | orchestrator |
2025-06-02 20:08:45.751500 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 20:08:45.751503 | orchestrator | Monday 02 June 2025 20:04:26 +0000 (0:00:00.289) 0:06:14.621 ***********
2025-06-02 20:08:45.751507 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751511 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751515 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751519 | orchestrator |
2025-06-02 20:08:45.751522 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 20:08:45.751526 | orchestrator | Monday 02 June 2025 20:04:26 +0000 (0:00:00.295) 0:06:14.916 ***********
2025-06-02 20:08:45.751530 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751534 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751537 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751541 | orchestrator |
2025-06-02 20:08:45.751545 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 20:08:45.751549 | orchestrator | Monday 02 June 2025 20:04:27 +0000 (0:00:00.584) 0:06:15.501 ***********
2025-06-02 20:08:45.751553 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751556 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751560 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751564 | orchestrator |
2025-06-02 20:08:45.751568 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-06-02 20:08:45.751572 | orchestrator | Monday 02 June 2025 20:04:27 +0000 (0:00:00.529) 0:06:16.030 ***********
2025-06-02 20:08:45.751575 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751579 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751583 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751587 | orchestrator |
2025-06-02 20:08:45.751590 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-06-02 20:08:45.751594 | orchestrator | Monday 02 June 2025 20:04:27 +0000 (0:00:00.305) 0:06:16.336 ***********
2025-06-02 20:08:45.751598 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 20:08:45.751602 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:08:45.751606 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:08:45.751609 | orchestrator |
2025-06-02 20:08:45.751613 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-06-02 20:08:45.751617 | orchestrator | Monday 02 June 2025 20:04:28 +0000 (0:00:00.885) 0:06:17.222 ***********
2025-06-02 20:08:45.751621 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:08:45.751625 | orchestrator |
2025-06-02 20:08:45.751628 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-06-02 20:08:45.751632 | orchestrator | Monday 02 June 2025 20:04:29 +0000 (0:00:00.754) 0:06:17.976 ***********
2025-06-02 20:08:45.751636 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751640 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751647 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751650 | orchestrator |
2025-06-02 20:08:45.751654 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-06-02 20:08:45.751658 | orchestrator | Monday 02 June 2025 20:04:29 +0000 (0:00:00.290) 0:06:18.267 ***********
2025-06-02 20:08:45.751662 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:08:45.751665 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:08:45.751669 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:08:45.751673 | orchestrator |
2025-06-02 20:08:45.751679 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-06-02 20:08:45.751683 | orchestrator | Monday 02 June 2025 20:04:30 +0000 (0:00:00.382) 0:06:18.649 ***********
2025-06-02 20:08:45.751687 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751691 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751695 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751699 | orchestrator |
2025-06-02 20:08:45.751702 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-06-02 20:08:45.751706 | orchestrator | Monday 02 June 2025 20:04:31 +0000 (0:00:00.828) 0:06:19.478 ***********
2025-06-02 20:08:45.751710 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:08:45.751714 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:08:45.751718 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:08:45.751721 | orchestrator |
2025-06-02 20:08:45.751725 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-06-02 20:08:45.751729 | orchestrator | Monday 02 June 2025 20:04:31 +0000 (0:00:00.397) 0:06:19.875 ***********
2025-06-02 20:08:45.751733 | orchestrator | changed: [testbed-node-4] => (item={'name':
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 20:08:45.751737 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 20:08:45.751740 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 20:08:45.751744 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 20:08:45.751765 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 20:08:45.751769 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 20:08:45.751773 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 20:08:45.751776 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 20:08:45.751784 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 20:08:45.751788 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 20:08:45.751792 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 20:08:45.751795 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 20:08:45.751799 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 20:08:45.751803 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 20:08:45.751807 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 20:08:45.751810 | orchestrator | 2025-06-02 20:08:45.751814 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
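The "Apply operating system tuning" task above loops over a list of sysctl items, each a dict with `name`, `value`, and an optional `enable` flag. The effect can be sketched as rendering those items into a sysctl.conf-style fragment (a sketch, not the playbook's actual implementation; the `render_sysctl` helper and the skip-on-`enable=False` behavior are assumptions for illustration):

```python
# The tunables below mirror the loop items logged above by the
# "Apply operating system tuning" task.
tunables = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render_sysctl(items):
    """Return sysctl.conf-style lines, skipping items with enable=False.

    Hypothetical helper: ceph-ansible applies these via the sysctl
    module rather than writing a file like this.
    """
    lines = []
    for item in items:
        if item.get("enable", True):
            lines.append(f"{item['name']} = {item['value']}")
    return "\n".join(lines)

print(render_sysctl(tunables))
```

Note that `vm.min_free_kbytes` is computed earlier in the play (the "Get default vm.min_free_kbytes" / "Set_fact vm_min_free_kbytes" tasks) rather than hard-coded.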
2025-06-02 20:08:45.751818 | orchestrator | Monday 02 June 2025 20:04:33 +0000 (0:00:02.208) 0:06:22.083 *********** 2025-06-02 20:08:45.751822 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.751826 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.751829 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.751833 | orchestrator | 2025-06-02 20:08:45.751837 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-02 20:08:45.751841 | orchestrator | Monday 02 June 2025 20:04:34 +0000 (0:00:00.300) 0:06:22.384 *********** 2025-06-02 20:08:45.751847 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.751851 | orchestrator | 2025-06-02 20:08:45.751855 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-02 20:08:45.751859 | orchestrator | Monday 02 June 2025 20:04:34 +0000 (0:00:00.827) 0:06:23.212 *********** 2025-06-02 20:08:45.751863 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 20:08:45.751866 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 20:08:45.751870 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 20:08:45.751874 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-02 20:08:45.751878 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-02 20:08:45.751881 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-02 20:08:45.751885 | orchestrator | 2025-06-02 20:08:45.751889 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-02 20:08:45.751893 | orchestrator | Monday 02 June 2025 20:04:35 +0000 (0:00:00.992) 0:06:24.204 *********** 2025-06-02 20:08:45.751897 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.751903 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:08:45.751910 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:08:45.751916 | orchestrator | 2025-06-02 20:08:45.751922 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-02 20:08:45.751927 | orchestrator | Monday 02 June 2025 20:04:37 +0000 (0:00:02.122) 0:06:26.327 *********** 2025-06-02 20:08:45.751933 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 20:08:45.751938 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:08:45.751943 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.751949 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 20:08:45.751955 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 20:08:45.751960 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.751966 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 20:08:45.751975 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 20:08:45.751981 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.751987 | orchestrator | 2025-06-02 20:08:45.751993 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-02 20:08:45.751999 | orchestrator | Monday 02 June 2025 20:04:39 +0000 (0:00:01.520) 0:06:27.847 *********** 2025-06-02 20:08:45.752005 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:08:45.752012 | orchestrator | 2025-06-02 20:08:45.752017 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-02 20:08:45.752022 | orchestrator | Monday 02 June 2025 20:04:41 +0000 (0:00:02.124) 0:06:29.971 *********** 2025-06-02 20:08:45.752028 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.752034 | orchestrator | 2025-06-02 20:08:45.752039 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-02 20:08:45.752045 | orchestrator | Monday 02 June 2025 20:04:42 +0000 (0:00:00.542) 0:06:30.514 *********** 2025-06-02 20:08:45.752051 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0b573976-5050-5314-b52d-708d81144fb3', 'data_vg': 'ceph-0b573976-5050-5314-b52d-708d81144fb3'}) 2025-06-02 20:08:45.752058 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5468daec-208d-5ea7-b544-bcde6bebed84', 'data_vg': 'ceph-5468daec-208d-5ea7-b544-bcde6bebed84'}) 2025-06-02 20:08:45.752064 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1b51fe1f-19f9-5db6-a741-38088f1d71cf', 'data_vg': 'ceph-1b51fe1f-19f9-5db6-a741-38088f1d71cf'}) 2025-06-02 20:08:45.752074 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1dc535ca-7422-5c6b-b80a-593b3887af48', 'data_vg': 'ceph-1dc535ca-7422-5c6b-b80a-593b3887af48'}) 2025-06-02 20:08:45.752086 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d0ca6db9-1635-53d8-80de-4807c4d987bd', 'data_vg': 'ceph-d0ca6db9-1635-53d8-80de-4807c4d987bd'}) 2025-06-02 20:08:45.752092 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2dc54921-ef42-515a-84de-1f3d0e017dc1', 'data_vg': 'ceph-2dc54921-ef42-515a-84de-1f3d0e017dc1'}) 2025-06-02 20:08:45.752098 | orchestrator | 2025-06-02 20:08:45.752104 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-02 20:08:45.752110 | orchestrator | Monday 02 June 2025 20:05:24 +0000 (0:00:42.843) 0:07:13.357 *********** 2025-06-02 20:08:45.752116 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752121 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
20:08:45.752127 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.752133 | orchestrator | 2025-06-02 20:08:45.752139 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-02 20:08:45.752145 | orchestrator | Monday 02 June 2025 20:05:25 +0000 (0:00:00.560) 0:07:13.918 *********** 2025-06-02 20:08:45.752151 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.752158 | orchestrator | 2025-06-02 20:08:45.752164 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-02 20:08:45.752170 | orchestrator | Monday 02 June 2025 20:05:26 +0000 (0:00:00.577) 0:07:14.496 *********** 2025-06-02 20:08:45.752177 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.752183 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.752189 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.752195 | orchestrator | 2025-06-02 20:08:45.752201 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-02 20:08:45.752207 | orchestrator | Monday 02 June 2025 20:05:26 +0000 (0:00:00.634) 0:07:15.130 *********** 2025-06-02 20:08:45.752213 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.752220 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.752225 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.752231 | orchestrator | 2025-06-02 20:08:45.752237 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-02 20:08:45.752243 | orchestrator | Monday 02 June 2025 20:05:29 +0000 (0:00:02.946) 0:07:18.077 *********** 2025-06-02 20:08:45.752249 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.752255 | orchestrator | 2025-06-02 20:08:45.752260 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-06-02 20:08:45.752266 | orchestrator | Monday 02 June 2025 20:05:30 +0000 (0:00:00.504) 0:07:18.582 *********** 2025-06-02 20:08:45.752271 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.752277 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.752283 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.752288 | orchestrator | 2025-06-02 20:08:45.752294 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-02 20:08:45.752300 | orchestrator | Monday 02 June 2025 20:05:31 +0000 (0:00:01.175) 0:07:19.757 *********** 2025-06-02 20:08:45.752306 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.752313 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.752318 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.752324 | orchestrator | 2025-06-02 20:08:45.752329 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-02 20:08:45.752336 | orchestrator | Monday 02 June 2025 20:05:32 +0000 (0:00:01.378) 0:07:21.136 *********** 2025-06-02 20:08:45.752342 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.752348 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.752354 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.752361 | orchestrator | 2025-06-02 20:08:45.752366 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-02 20:08:45.752378 | orchestrator | Monday 02 June 2025 20:05:34 +0000 (0:00:01.755) 0:07:22.892 *********** 2025-06-02 20:08:45.752385 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752396 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.752402 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.752409 | orchestrator | 2025-06-02 20:08:45.752415 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-06-02 20:08:45.752421 | orchestrator | Monday 02 June 2025 20:05:34 +0000 (0:00:00.335) 0:07:23.227 *********** 2025-06-02 20:08:45.752427 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752433 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.752439 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.752446 | orchestrator | 2025-06-02 20:08:45.752452 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-02 20:08:45.752459 | orchestrator | Monday 02 June 2025 20:05:35 +0000 (0:00:00.315) 0:07:23.543 *********** 2025-06-02 20:08:45.752465 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 20:08:45.752471 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-06-02 20:08:45.752478 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-02 20:08:45.752483 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-06-02 20:08:45.752490 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-02 20:08:45.752497 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-06-02 20:08:45.752503 | orchestrator | 2025-06-02 20:08:45.752510 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-02 20:08:45.752516 | orchestrator | Monday 02 June 2025 20:05:36 +0000 (0:00:01.296) 0:07:24.839 *********** 2025-06-02 20:08:45.752521 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 20:08:45.752526 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-02 20:08:45.752532 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-02 20:08:45.752537 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-02 20:08:45.752543 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-02 20:08:45.752548 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-02 20:08:45.752554 | orchestrator | 2025-06-02 20:08:45.752559 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-06-02 20:08:45.752565 | orchestrator | Monday 02 June 2025 20:05:38 +0000 (0:00:02.174) 0:07:27.013 *********** 2025-06-02 20:08:45.752571 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 20:08:45.752578 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-02 20:08:45.752591 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-02 20:08:45.752598 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-02 20:08:45.752605 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-02 20:08:45.752611 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-02 20:08:45.752617 | orchestrator | 2025-06-02 20:08:45.752624 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-02 20:08:45.752630 | orchestrator | Monday 02 June 2025 20:05:42 +0000 (0:00:03.569) 0:07:30.582 *********** 2025-06-02 20:08:45.752636 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752642 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.752647 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:08:45.752653 | orchestrator | 2025-06-02 20:08:45.752659 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-02 20:08:45.752665 | orchestrator | Monday 02 June 2025 20:05:45 +0000 (0:00:03.123) 0:07:33.706 *********** 2025-06-02 20:08:45.752670 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752676 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.752688 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
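The "Wait for all osd to be up" task above uses Ansible's until/retries mechanism (60 retries in this run), re-polling the monitor until every newly created OSD reports up; the log shows one FAILED - RETRYING attempt before success. The retry semantics can be sketched as follows (a sketch only; the `wait_until` helper, the fake `osds_up` check, and the zero delay are assumptions standing in for the real `ceph osd stat` query):

```python
import time

def wait_until(check, retries=60, delay=0.0):
    """Re-run `check` until it returns True or retries are exhausted,
    mirroring Ansible's `until ... retries ... delay` loop."""
    for _attempt in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False

# Fake check that succeeds on the second attempt, matching the single
# FAILED - RETRYING line followed by "ok" in the log above.
calls = {"n": 0}

def osds_up():
    calls["n"] += 1
    return calls["n"] >= 2

ok = wait_until(osds_up, retries=60, delay=0)
print(ok)
```

In the real task the check is delegated to a monitor node (`testbed-node-3 -> testbed-node-0` style delegation, as seen throughout this play) and compares the "up" count from the cluster against the expected number of OSDs.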
2025-06-02 20:08:45.752694 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:08:45.752700 | orchestrator | 2025-06-02 20:08:45.752716 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-02 20:08:45.752723 | orchestrator | Monday 02 June 2025 20:05:58 +0000 (0:00:12.818) 0:07:46.525 *********** 2025-06-02 20:08:45.752729 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752735 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.752742 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.752772 | orchestrator | 2025-06-02 20:08:45.752779 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 20:08:45.752785 | orchestrator | Monday 02 June 2025 20:05:58 +0000 (0:00:00.782) 0:07:47.307 *********** 2025-06-02 20:08:45.752791 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752797 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.752803 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.752808 | orchestrator | 2025-06-02 20:08:45.752814 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-02 20:08:45.752820 | orchestrator | Monday 02 June 2025 20:05:59 +0000 (0:00:00.559) 0:07:47.867 *********** 2025-06-02 20:08:45.752826 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.752832 | orchestrator | 2025-06-02 20:08:45.752839 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-02 20:08:45.752845 | orchestrator | Monday 02 June 2025 20:06:00 +0000 (0:00:00.526) 0:07:48.393 *********** 2025-06-02 20:08:45.752851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:08:45.752857 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-06-02 20:08:45.752864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:08:45.752870 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752876 | orchestrator | 2025-06-02 20:08:45.752881 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-02 20:08:45.752887 | orchestrator | Monday 02 June 2025 20:06:00 +0000 (0:00:00.379) 0:07:48.772 *********** 2025-06-02 20:08:45.752893 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752898 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.752903 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.752909 | orchestrator | 2025-06-02 20:08:45.752915 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-02 20:08:45.752920 | orchestrator | Monday 02 June 2025 20:06:00 +0000 (0:00:00.310) 0:07:49.082 *********** 2025-06-02 20:08:45.752931 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752938 | orchestrator | 2025-06-02 20:08:45.752943 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-02 20:08:45.752948 | orchestrator | Monday 02 June 2025 20:06:00 +0000 (0:00:00.210) 0:07:49.293 *********** 2025-06-02 20:08:45.752954 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752959 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.752965 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.752971 | orchestrator | 2025-06-02 20:08:45.752976 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-02 20:08:45.752982 | orchestrator | Monday 02 June 2025 20:06:01 +0000 (0:00:00.566) 0:07:49.859 *********** 2025-06-02 20:08:45.752987 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.752992 | orchestrator | 2025-06-02 20:08:45.752998 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-02 20:08:45.753003 | orchestrator | Monday 02 June 2025 20:06:01 +0000 (0:00:00.207) 0:07:50.067 *********** 2025-06-02 20:08:45.753009 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753014 | orchestrator | 2025-06-02 20:08:45.753020 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-02 20:08:45.753025 | orchestrator | Monday 02 June 2025 20:06:01 +0000 (0:00:00.226) 0:07:50.293 *********** 2025-06-02 20:08:45.753031 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753037 | orchestrator | 2025-06-02 20:08:45.753042 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-02 20:08:45.753054 | orchestrator | Monday 02 June 2025 20:06:02 +0000 (0:00:00.141) 0:07:50.435 *********** 2025-06-02 20:08:45.753060 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753065 | orchestrator | 2025-06-02 20:08:45.753071 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-02 20:08:45.753076 | orchestrator | Monday 02 June 2025 20:06:02 +0000 (0:00:00.224) 0:07:50.659 *********** 2025-06-02 20:08:45.753081 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753087 | orchestrator | 2025-06-02 20:08:45.753093 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-02 20:08:45.753099 | orchestrator | Monday 02 June 2025 20:06:02 +0000 (0:00:00.221) 0:07:50.880 *********** 2025-06-02 20:08:45.753113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:08:45.753120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:08:45.753126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:08:45.753132 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
20:08:45.753137 | orchestrator | 2025-06-02 20:08:45.753143 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-02 20:08:45.753149 | orchestrator | Monday 02 June 2025 20:06:02 +0000 (0:00:00.374) 0:07:51.255 *********** 2025-06-02 20:08:45.753155 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753160 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.753166 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.753172 | orchestrator | 2025-06-02 20:08:45.753178 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-02 20:08:45.753184 | orchestrator | Monday 02 June 2025 20:06:03 +0000 (0:00:00.309) 0:07:51.565 *********** 2025-06-02 20:08:45.753190 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753196 | orchestrator | 2025-06-02 20:08:45.753202 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-02 20:08:45.753208 | orchestrator | Monday 02 June 2025 20:06:04 +0000 (0:00:00.830) 0:07:52.396 *********** 2025-06-02 20:08:45.753213 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753220 | orchestrator | 2025-06-02 20:08:45.753226 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-02 20:08:45.753232 | orchestrator | 2025-06-02 20:08:45.753238 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 20:08:45.753245 | orchestrator | Monday 02 June 2025 20:06:04 +0000 (0:00:00.738) 0:07:53.134 *********** 2025-06-02 20:08:45.753251 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.753260 | orchestrator | 2025-06-02 20:08:45.753267 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-02 20:08:45.753273 | orchestrator | Monday 02 June 2025 20:06:06 +0000 (0:00:01.225) 0:07:54.360 *********** 2025-06-02 20:08:45.753279 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.753285 | orchestrator | 2025-06-02 20:08:45.753292 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 20:08:45.753298 | orchestrator | Monday 02 June 2025 20:06:07 +0000 (0:00:01.284) 0:07:55.644 *********** 2025-06-02 20:08:45.753305 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753311 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.753317 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.753324 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.753331 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.753337 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.753343 | orchestrator | 2025-06-02 20:08:45.753350 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 20:08:45.753363 | orchestrator | Monday 02 June 2025 20:06:07 +0000 (0:00:00.670) 0:07:56.315 *********** 2025-06-02 20:08:45.753368 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.753374 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.753379 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.753385 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.753392 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.753398 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.753404 | orchestrator | 2025-06-02 20:08:45.753411 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 20:08:45.753417 | orchestrator | Monday 02 
June 2025 20:06:08 +0000 (0:00:00.865) 0:07:57.180 *********** 2025-06-02 20:08:45.753423 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.753434 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.753441 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.753446 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.753450 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.753454 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.753460 | orchestrator | 2025-06-02 20:08:45.753466 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 20:08:45.753472 | orchestrator | Monday 02 June 2025 20:06:09 +0000 (0:00:01.171) 0:07:58.352 *********** 2025-06-02 20:08:45.753478 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.753484 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.753490 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.753495 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.753501 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.753506 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.753512 | orchestrator | 2025-06-02 20:08:45.753517 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 20:08:45.753523 | orchestrator | Monday 02 June 2025 20:06:10 +0000 (0:00:00.870) 0:07:59.222 *********** 2025-06-02 20:08:45.753528 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753534 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.753539 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.753545 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.753552 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.753558 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.753564 | orchestrator | 2025-06-02 20:08:45.753570 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-06-02 20:08:45.753576 | orchestrator | Monday 02 June 2025 20:06:11 +0000 (0:00:00.796) 0:08:00.019 *********** 2025-06-02 20:08:45.753582 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.753587 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.753593 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.753599 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753606 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.753612 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.753618 | orchestrator | 2025-06-02 20:08:45.753624 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 20:08:45.753630 | orchestrator | Monday 02 June 2025 20:06:12 +0000 (0:00:00.573) 0:08:00.593 *********** 2025-06-02 20:08:45.753644 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.753650 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.753656 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.753661 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753667 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.753674 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.753680 | orchestrator | 2025-06-02 20:08:45.753686 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 20:08:45.753691 | orchestrator | Monday 02 June 2025 20:06:12 +0000 (0:00:00.755) 0:08:01.349 *********** 2025-06-02 20:08:45.753697 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.753702 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.753708 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.753719 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.753725 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.753731 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.753736 | 
orchestrator | 2025-06-02 20:08:45.753743 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 20:08:45.753797 | orchestrator | Monday 02 June 2025 20:06:13 +0000 (0:00:00.986) 0:08:02.335 *********** 2025-06-02 20:08:45.753804 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.753810 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.753816 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.753822 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.753829 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.753835 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.753841 | orchestrator | 2025-06-02 20:08:45.753847 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 20:08:45.753853 | orchestrator | Monday 02 June 2025 20:06:15 +0000 (0:00:01.247) 0:08:03.583 *********** 2025-06-02 20:08:45.753860 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.753867 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.753873 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.753880 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753886 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.753893 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.753899 | orchestrator | 2025-06-02 20:08:45.753906 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 20:08:45.753912 | orchestrator | Monday 02 June 2025 20:06:15 +0000 (0:00:00.562) 0:08:04.145 *********** 2025-06-02 20:08:45.753918 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.753925 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.753931 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.753937 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.753943 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
20:08:45.753950 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.753956 | orchestrator | 2025-06-02 20:08:45.753962 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 20:08:45.753968 | orchestrator | Monday 02 June 2025 20:06:16 +0000 (0:00:00.758) 0:08:04.904 *********** 2025-06-02 20:08:45.753974 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.753980 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.753987 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.753994 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.754000 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754006 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754045 | orchestrator | 2025-06-02 20:08:45.754053 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 20:08:45.754060 | orchestrator | Monday 02 June 2025 20:06:17 +0000 (0:00:00.583) 0:08:05.488 *********** 2025-06-02 20:08:45.754065 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.754071 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.754078 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.754084 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.754090 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754096 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754102 | orchestrator | 2025-06-02 20:08:45.754108 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 20:08:45.754115 | orchestrator | Monday 02 June 2025 20:06:17 +0000 (0:00:00.764) 0:08:06.252 *********** 2025-06-02 20:08:45.754126 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.754133 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.754139 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.754146 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 20:08:45.754152 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754158 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754165 | orchestrator | 2025-06-02 20:08:45.754171 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 20:08:45.754183 | orchestrator | Monday 02 June 2025 20:06:18 +0000 (0:00:00.590) 0:08:06.843 *********** 2025-06-02 20:08:45.754190 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.754196 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.754202 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.754209 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.754215 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.754221 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.754227 | orchestrator | 2025-06-02 20:08:45.754233 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 20:08:45.754240 | orchestrator | Monday 02 June 2025 20:06:19 +0000 (0:00:00.761) 0:08:07.605 *********** 2025-06-02 20:08:45.754246 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:08:45.754253 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:08:45.754259 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:08:45.754265 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.754272 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.754277 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.754283 | orchestrator | 2025-06-02 20:08:45.754290 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 20:08:45.754296 | orchestrator | Monday 02 June 2025 20:06:19 +0000 (0:00:00.601) 0:08:08.207 *********** 2025-06-02 20:08:45.754303 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.754309 | orchestrator | ok: [testbed-node-1] 2025-06-02 
20:08:45.754316 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.754322 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.754328 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.754335 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.754341 | orchestrator | 2025-06-02 20:08:45.754347 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 20:08:45.754361 | orchestrator | Monday 02 June 2025 20:06:20 +0000 (0:00:00.774) 0:08:08.981 *********** 2025-06-02 20:08:45.754368 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.754374 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.754381 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.754387 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.754393 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754400 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754406 | orchestrator | 2025-06-02 20:08:45.754412 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 20:08:45.754419 | orchestrator | Monday 02 June 2025 20:06:21 +0000 (0:00:00.583) 0:08:09.565 *********** 2025-06-02 20:08:45.754425 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.754431 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.754438 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.754444 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.754451 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754458 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754463 | orchestrator | 2025-06-02 20:08:45.754467 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-02 20:08:45.754471 | orchestrator | Monday 02 June 2025 20:06:22 +0000 (0:00:01.194) 0:08:10.759 *********** 2025-06-02 20:08:45.754475 | orchestrator | changed: [testbed-node-0] 2025-06-02 
20:08:45.754479 | orchestrator | 2025-06-02 20:08:45.754483 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-02 20:08:45.754487 | orchestrator | Monday 02 June 2025 20:06:26 +0000 (0:00:04.335) 0:08:15.094 *********** 2025-06-02 20:08:45.754491 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.754494 | orchestrator | 2025-06-02 20:08:45.754498 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-02 20:08:45.754502 | orchestrator | Monday 02 June 2025 20:06:28 +0000 (0:00:01.960) 0:08:17.055 *********** 2025-06-02 20:08:45.754506 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.754510 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.754518 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.754522 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.754526 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.754529 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.754533 | orchestrator | 2025-06-02 20:08:45.754537 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-02 20:08:45.754541 | orchestrator | Monday 02 June 2025 20:06:30 +0000 (0:00:01.656) 0:08:18.711 *********** 2025-06-02 20:08:45.754545 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.754548 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.754552 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.754556 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.754560 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.754564 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.754567 | orchestrator | 2025-06-02 20:08:45.754571 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-02 20:08:45.754575 | orchestrator | Monday 02 June 2025 20:06:31 +0000 
(0:00:00.939) 0:08:19.650 *********** 2025-06-02 20:08:45.754580 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.754585 | orchestrator | 2025-06-02 20:08:45.754588 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-02 20:08:45.754593 | orchestrator | Monday 02 June 2025 20:06:32 +0000 (0:00:01.151) 0:08:20.802 *********** 2025-06-02 20:08:45.754599 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.754605 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.754611 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.754617 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.754623 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.754629 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.754635 | orchestrator | 2025-06-02 20:08:45.754642 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-02 20:08:45.754646 | orchestrator | Monday 02 June 2025 20:06:34 +0000 (0:00:01.732) 0:08:22.534 *********** 2025-06-02 20:08:45.754653 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.754657 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.754661 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.754665 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.754669 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.754672 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.754676 | orchestrator | 2025-06-02 20:08:45.754680 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-02 20:08:45.754684 | orchestrator | Monday 02 June 2025 20:06:37 +0000 (0:00:03.458) 0:08:25.992 *********** 2025-06-02 20:08:45.754688 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.754692 | orchestrator | 2025-06-02 20:08:45.754696 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-02 20:08:45.754700 | orchestrator | Monday 02 June 2025 20:06:38 +0000 (0:00:01.363) 0:08:27.356 *********** 2025-06-02 20:08:45.754703 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.754707 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.754711 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:08:45.754715 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.754719 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754722 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754726 | orchestrator | 2025-06-02 20:08:45.754730 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-02 20:08:45.754734 | orchestrator | Monday 02 June 2025 20:06:39 +0000 (0:00:00.818) 0:08:28.174 *********** 2025-06-02 20:08:45.754738 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:08:45.754741 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:08:45.754763 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.754768 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:08:45.754772 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.754775 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.754779 | orchestrator | 2025-06-02 20:08:45.754783 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-02 20:08:45.754787 | orchestrator | Monday 02 June 2025 20:06:41 +0000 (0:00:02.115) 0:08:30.290 *********** 2025-06-02 20:08:45.754791 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:08:45.754798 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:08:45.754802 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 20:08:45.754806 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.754810 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754813 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754817 | orchestrator | 2025-06-02 20:08:45.754821 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-02 20:08:45.754825 | orchestrator | 2025-06-02 20:08:45.754829 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 20:08:45.754832 | orchestrator | Monday 02 June 2025 20:06:42 +0000 (0:00:01.054) 0:08:31.344 *********** 2025-06-02 20:08:45.754836 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.754840 | orchestrator | 2025-06-02 20:08:45.754844 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 20:08:45.754848 | orchestrator | Monday 02 June 2025 20:06:43 +0000 (0:00:00.479) 0:08:31.823 *********** 2025-06-02 20:08:45.754852 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.754855 | orchestrator | 2025-06-02 20:08:45.754859 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 20:08:45.754863 | orchestrator | Monday 02 June 2025 20:06:44 +0000 (0:00:00.717) 0:08:32.541 *********** 2025-06-02 20:08:45.754867 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.754870 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.754874 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.754878 | orchestrator | 2025-06-02 20:08:45.754882 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 20:08:45.754885 | orchestrator | 
Monday 02 June 2025 20:06:44 +0000 (0:00:00.324) 0:08:32.866 *********** 2025-06-02 20:08:45.754889 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.754893 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754897 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754900 | orchestrator | 2025-06-02 20:08:45.754904 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 20:08:45.754908 | orchestrator | Monday 02 June 2025 20:06:45 +0000 (0:00:00.662) 0:08:33.528 *********** 2025-06-02 20:08:45.754911 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.754915 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754919 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754924 | orchestrator | 2025-06-02 20:08:45.754930 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 20:08:45.754937 | orchestrator | Monday 02 June 2025 20:06:46 +0000 (0:00:00.951) 0:08:34.480 *********** 2025-06-02 20:08:45.754943 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.754949 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.754955 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.754962 | orchestrator | 2025-06-02 20:08:45.754968 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 20:08:45.754975 | orchestrator | Monday 02 June 2025 20:06:46 +0000 (0:00:00.658) 0:08:35.138 *********** 2025-06-02 20:08:45.754981 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.754985 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.754989 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.754993 | orchestrator | 2025-06-02 20:08:45.754997 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 20:08:45.755008 | orchestrator | Monday 02 June 2025 20:06:47 +0000 (0:00:00.292) 
0:08:35.430 *********** 2025-06-02 20:08:45.755014 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.755021 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.755027 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.755034 | orchestrator | 2025-06-02 20:08:45.755040 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 20:08:45.755046 | orchestrator | Monday 02 June 2025 20:06:47 +0000 (0:00:00.281) 0:08:35.712 *********** 2025-06-02 20:08:45.755058 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.755064 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.755070 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.755076 | orchestrator | 2025-06-02 20:08:45.755083 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 20:08:45.755089 | orchestrator | Monday 02 June 2025 20:06:47 +0000 (0:00:00.587) 0:08:36.299 *********** 2025-06-02 20:08:45.755095 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.755101 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.755108 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.755115 | orchestrator | 2025-06-02 20:08:45.755119 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 20:08:45.755123 | orchestrator | Monday 02 June 2025 20:06:48 +0000 (0:00:00.701) 0:08:37.001 *********** 2025-06-02 20:08:45.755130 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.755136 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.755142 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.755148 | orchestrator | 2025-06-02 20:08:45.755154 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 20:08:45.755160 | orchestrator | Monday 02 June 2025 20:06:49 +0000 (0:00:00.687) 0:08:37.688 *********** 2025-06-02 
20:08:45.755167 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.755173 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.755179 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.755185 | orchestrator | 2025-06-02 20:08:45.755192 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 20:08:45.755198 | orchestrator | Monday 02 June 2025 20:06:49 +0000 (0:00:00.286) 0:08:37.975 *********** 2025-06-02 20:08:45.755204 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.755210 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.755216 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.755220 | orchestrator | 2025-06-02 20:08:45.755224 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 20:08:45.755228 | orchestrator | Monday 02 June 2025 20:06:50 +0000 (0:00:00.571) 0:08:38.546 *********** 2025-06-02 20:08:45.755232 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.755235 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.755239 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.755243 | orchestrator | 2025-06-02 20:08:45.755250 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 20:08:45.755254 | orchestrator | Monday 02 June 2025 20:06:50 +0000 (0:00:00.327) 0:08:38.874 *********** 2025-06-02 20:08:45.755258 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.755262 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.755268 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.755275 | orchestrator | 2025-06-02 20:08:45.755281 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 20:08:45.755287 | orchestrator | Monday 02 June 2025 20:06:50 +0000 (0:00:00.305) 0:08:39.180 *********** 2025-06-02 20:08:45.755293 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 20:08:45.755299 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.755305 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.755312 | orchestrator | 2025-06-02 20:08:45.755318 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 20:08:45.755324 | orchestrator | Monday 02 June 2025 20:06:51 +0000 (0:00:00.308) 0:08:39.488 *********** 2025-06-02 20:08:45.755335 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.755342 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.755348 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.755354 | orchestrator | 2025-06-02 20:08:45.755360 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 20:08:45.755366 | orchestrator | Monday 02 June 2025 20:06:51 +0000 (0:00:00.533) 0:08:40.022 *********** 2025-06-02 20:08:45.755372 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.755379 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.755385 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.755391 | orchestrator | 2025-06-02 20:08:45.755397 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 20:08:45.755403 | orchestrator | Monday 02 June 2025 20:06:51 +0000 (0:00:00.283) 0:08:40.306 *********** 2025-06-02 20:08:45.755409 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.755416 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.755422 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.755428 | orchestrator | 2025-06-02 20:08:45.755434 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 20:08:45.755440 | orchestrator | Monday 02 June 2025 20:06:52 +0000 (0:00:00.294) 0:08:40.600 *********** 2025-06-02 20:08:45.755446 | orchestrator | ok: [testbed-node-3] 
2025-06-02 20:08:45.755452 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.755458 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.755464 | orchestrator | 2025-06-02 20:08:45.755470 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 20:08:45.755477 | orchestrator | Monday 02 June 2025 20:06:52 +0000 (0:00:00.327) 0:08:40.928 *********** 2025-06-02 20:08:45.755483 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.755489 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.755496 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.755502 | orchestrator | 2025-06-02 20:08:45.755508 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-02 20:08:45.755514 | orchestrator | Monday 02 June 2025 20:06:53 +0000 (0:00:00.746) 0:08:41.675 *********** 2025-06-02 20:08:45.755520 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.755526 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.755532 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-02 20:08:45.755539 | orchestrator | 2025-06-02 20:08:45.755546 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-02 20:08:45.755552 | orchestrator | Monday 02 June 2025 20:06:53 +0000 (0:00:00.368) 0:08:42.043 *********** 2025-06-02 20:08:45.755558 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:08:45.755564 | orchestrator | 2025-06-02 20:08:45.755571 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-02 20:08:45.755577 | orchestrator | Monday 02 June 2025 20:06:55 +0000 (0:00:02.100) 0:08:44.143 *********** 2025-06-02 20:08:45.755585 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-02 20:08:45.755593 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.755599 | orchestrator | 2025-06-02 20:08:45.755606 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-02 20:08:45.755612 | orchestrator | Monday 02 June 2025 20:06:55 +0000 (0:00:00.211) 0:08:44.355 *********** 2025-06-02 20:08:45.755621 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 20:08:45.755629 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 20:08:45.755640 | orchestrator | 2025-06-02 20:08:45.755646 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-02 20:08:45.755652 | orchestrator | Monday 02 June 2025 20:07:05 +0000 (0:00:09.065) 0:08:53.420 *********** 2025-06-02 20:08:45.755659 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:08:45.755664 | orchestrator | 2025-06-02 20:08:45.755670 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-02 20:08:45.755676 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:03.461) 0:08:56.882 *********** 2025-06-02 20:08:45.755682 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.755688 | orchestrator | 2025-06-02 20:08:45.755697 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-02 20:08:45.755703 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:00.444) 0:08:57.327 *********** 2025-06-02 20:08:45.755708 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 20:08:45.755714 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 20:08:45.755719 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 20:08:45.755725 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-02 20:08:45.755731 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-02 20:08:45.755736 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-02 20:08:45.755796 | orchestrator | 2025-06-02 20:08:45.755805 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-02 20:08:45.755812 | orchestrator | Monday 02 June 2025 20:07:09 +0000 (0:00:01.021) 0:08:58.348 *********** 2025-06-02 20:08:45.755819 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.755826 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:08:45.755832 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:08:45.755838 | orchestrator | 2025-06-02 20:08:45.755845 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-02 20:08:45.755851 | orchestrator | Monday 02 June 2025 20:07:12 +0000 (0:00:02.411) 0:09:00.759 *********** 2025-06-02 20:08:45.755858 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 20:08:45.755865 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:08:45.755871 | orchestrator | changed: [testbed-node-3] 
2025-06-02 20:08:45.755878 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 20:08:45.755885 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 20:08:45.755891 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.755898 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 20:08:45.755904 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 20:08:45.755911 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.755917 | orchestrator | 2025-06-02 20:08:45.755924 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-02 20:08:45.755930 | orchestrator | Monday 02 June 2025 20:07:13 +0000 (0:00:01.514) 0:09:02.274 *********** 2025-06-02 20:08:45.755937 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.755943 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.755950 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.755956 | orchestrator | 2025-06-02 20:08:45.755963 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-02 20:08:45.755969 | orchestrator | Monday 02 June 2025 20:07:16 +0000 (0:00:02.884) 0:09:05.159 *********** 2025-06-02 20:08:45.755976 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.755988 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.755994 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.756000 | orchestrator | 2025-06-02 20:08:45.756007 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-02 20:08:45.756013 | orchestrator | Monday 02 June 2025 20:07:17 +0000 (0:00:00.345) 0:09:05.505 *********** 2025-06-02 20:08:45.756021 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.756025 | orchestrator | 2025-06-02 20:08:45.756029 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-02 20:08:45.756033 | orchestrator | Monday 02 June 2025 20:07:17 +0000 (0:00:00.766) 0:09:06.271 *********** 2025-06-02 20:08:45.756039 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.756043 | orchestrator | 2025-06-02 20:08:45.756047 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-02 20:08:45.756051 | orchestrator | Monday 02 June 2025 20:07:18 +0000 (0:00:00.549) 0:09:06.821 *********** 2025-06-02 20:08:45.756054 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.756058 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.756064 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.756070 | orchestrator | 2025-06-02 20:08:45.756076 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-02 20:08:45.756082 | orchestrator | Monday 02 June 2025 20:07:19 +0000 (0:00:01.294) 0:09:08.116 *********** 2025-06-02 20:08:45.756088 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.756094 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.756101 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.756107 | orchestrator | 2025-06-02 20:08:45.756114 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-02 20:08:45.756120 | orchestrator | Monday 02 June 2025 20:07:21 +0000 (0:00:01.539) 0:09:09.655 *********** 2025-06-02 20:08:45.756126 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.756132 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.756138 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.756145 | orchestrator | 2025-06-02 20:08:45.756151 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-06-02 20:08:45.756157 | orchestrator | Monday 02 June 2025 20:07:23 +0000 (0:00:01.908) 0:09:11.564 *********** 2025-06-02 20:08:45.756163 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.756169 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.756175 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.756181 | orchestrator | 2025-06-02 20:08:45.756188 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-02 20:08:45.756194 | orchestrator | Monday 02 June 2025 20:07:25 +0000 (0:00:02.009) 0:09:13.574 *********** 2025-06-02 20:08:45.756200 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.756206 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.756212 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.756219 | orchestrator | 2025-06-02 20:08:45.756230 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 20:08:45.756237 | orchestrator | Monday 02 June 2025 20:07:26 +0000 (0:00:01.540) 0:09:15.114 *********** 2025-06-02 20:08:45.756243 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.756249 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.756255 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.756262 | orchestrator | 2025-06-02 20:08:45.756268 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-02 20:08:45.756274 | orchestrator | Monday 02 June 2025 20:07:27 +0000 (0:00:00.794) 0:09:15.909 *********** 2025-06-02 20:08:45.756281 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.756287 | orchestrator | 2025-06-02 20:08:45.756293 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-02 20:08:45.756304 | orchestrator | 
Monday 02 June 2025 20:07:28 +0000 (0:00:00.749) 0:09:16.659 *********** 2025-06-02 20:08:45.756311 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.756317 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.756323 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.756329 | orchestrator | 2025-06-02 20:08:45.756336 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-02 20:08:45.756341 | orchestrator | Monday 02 June 2025 20:07:28 +0000 (0:00:00.349) 0:09:17.009 *********** 2025-06-02 20:08:45.756347 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.756353 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.756359 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.756366 | orchestrator | 2025-06-02 20:08:45.756372 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-02 20:08:45.756378 | orchestrator | Monday 02 June 2025 20:07:30 +0000 (0:00:01.354) 0:09:18.363 *********** 2025-06-02 20:08:45.756384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:08:45.756390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:08:45.756396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:08:45.756402 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.756408 | orchestrator | 2025-06-02 20:08:45.756414 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-02 20:08:45.756420 | orchestrator | Monday 02 June 2025 20:07:31 +0000 (0:00:01.345) 0:09:19.709 *********** 2025-06-02 20:08:45.756447 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.756453 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.756459 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.756466 | orchestrator | 2025-06-02 20:08:45.756472 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-06-02 20:08:45.756478 | orchestrator | 2025-06-02 20:08:45.756484 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 20:08:45.756490 | orchestrator | Monday 02 June 2025 20:07:32 +0000 (0:00:01.263) 0:09:20.972 *********** 2025-06-02 20:08:45.756497 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.756503 | orchestrator | 2025-06-02 20:08:45.756509 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 20:08:45.756515 | orchestrator | Monday 02 June 2025 20:07:33 +0000 (0:00:00.745) 0:09:21.718 *********** 2025-06-02 20:08:45.756522 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.756528 | orchestrator | 2025-06-02 20:08:45.756534 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 20:08:45.756540 | orchestrator | Monday 02 June 2025 20:07:34 +0000 (0:00:01.167) 0:09:22.886 *********** 2025-06-02 20:08:45.756546 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.756552 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.756558 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.756565 | orchestrator | 2025-06-02 20:08:45.756575 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 20:08:45.756581 | orchestrator | Monday 02 June 2025 20:07:34 +0000 (0:00:00.366) 0:09:23.252 *********** 2025-06-02 20:08:45.756588 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.756594 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.756600 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.756607 | orchestrator | 
2025-06-02 20:08:45.756613 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 20:08:45.756620 | orchestrator | Monday 02 June 2025 20:07:35 +0000 (0:00:00.782) 0:09:24.035 *********** 2025-06-02 20:08:45.756626 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.756632 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.756638 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.756648 | orchestrator | 2025-06-02 20:08:45.756655 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 20:08:45.756661 | orchestrator | Monday 02 June 2025 20:07:36 +0000 (0:00:00.742) 0:09:24.777 *********** 2025-06-02 20:08:45.756667 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.756673 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.756680 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.756686 | orchestrator | 2025-06-02 20:08:45.756692 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 20:08:45.756698 | orchestrator | Monday 02 June 2025 20:07:37 +0000 (0:00:01.210) 0:09:25.988 *********** 2025-06-02 20:08:45.756705 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.756711 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.756717 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.756723 | orchestrator | 2025-06-02 20:08:45.756729 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 20:08:45.756736 | orchestrator | Monday 02 June 2025 20:07:37 +0000 (0:00:00.299) 0:09:26.287 *********** 2025-06-02 20:08:45.756742 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.756762 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.756768 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.756774 | orchestrator | 2025-06-02 20:08:45.756781 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 20:08:45.756791 | orchestrator | Monday 02 June 2025 20:07:38 +0000 (0:00:00.305) 0:09:26.593 *********** 2025-06-02 20:08:45.756798 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.756804 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.756810 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.756816 | orchestrator | 2025-06-02 20:08:45.756822 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 20:08:45.756829 | orchestrator | Monday 02 June 2025 20:07:38 +0000 (0:00:00.338) 0:09:26.931 *********** 2025-06-02 20:08:45.756835 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.756841 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.756848 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.756854 | orchestrator | 2025-06-02 20:08:45.756860 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 20:08:45.756866 | orchestrator | Monday 02 June 2025 20:07:39 +0000 (0:00:01.012) 0:09:27.944 *********** 2025-06-02 20:08:45.756872 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.756879 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.756885 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.756891 | orchestrator | 2025-06-02 20:08:45.756897 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 20:08:45.756903 | orchestrator | Monday 02 June 2025 20:07:40 +0000 (0:00:00.752) 0:09:28.696 *********** 2025-06-02 20:08:45.756909 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.756915 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.756922 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.756928 | orchestrator | 2025-06-02 20:08:45.756934 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-06-02 20:08:45.756940 | orchestrator | Monday 02 June 2025 20:07:40 +0000 (0:00:00.345) 0:09:29.042 *********** 2025-06-02 20:08:45.756945 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.756951 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.756957 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.756963 | orchestrator | 2025-06-02 20:08:45.756969 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 20:08:45.756975 | orchestrator | Monday 02 June 2025 20:07:40 +0000 (0:00:00.294) 0:09:29.336 *********** 2025-06-02 20:08:45.756982 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.756988 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.756994 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.757001 | orchestrator | 2025-06-02 20:08:45.757007 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 20:08:45.757019 | orchestrator | Monday 02 June 2025 20:07:41 +0000 (0:00:00.592) 0:09:29.929 *********** 2025-06-02 20:08:45.757025 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.757031 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.757037 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.757043 | orchestrator | 2025-06-02 20:08:45.757049 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 20:08:45.757055 | orchestrator | Monday 02 June 2025 20:07:41 +0000 (0:00:00.312) 0:09:30.241 *********** 2025-06-02 20:08:45.757062 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.757068 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.757074 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.757080 | orchestrator | 2025-06-02 20:08:45.757086 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-06-02 20:08:45.757093 | orchestrator | Monday 02 June 2025 20:07:42 +0000 (0:00:00.368) 0:09:30.610 *********** 2025-06-02 20:08:45.757099 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.757105 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.757112 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.757118 | orchestrator | 2025-06-02 20:08:45.757124 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 20:08:45.757130 | orchestrator | Monday 02 June 2025 20:07:42 +0000 (0:00:00.458) 0:09:31.069 *********** 2025-06-02 20:08:45.757136 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.757143 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.757149 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.757155 | orchestrator | 2025-06-02 20:08:45.757161 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 20:08:45.757172 | orchestrator | Monday 02 June 2025 20:07:43 +0000 (0:00:00.599) 0:09:31.668 *********** 2025-06-02 20:08:45.757179 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.757185 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.757191 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.757197 | orchestrator | 2025-06-02 20:08:45.757203 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 20:08:45.757210 | orchestrator | Monday 02 June 2025 20:07:43 +0000 (0:00:00.307) 0:09:31.976 *********** 2025-06-02 20:08:45.757216 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.757222 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.757228 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.757234 | orchestrator | 2025-06-02 20:08:45.757241 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-06-02 20:08:45.757247 | orchestrator | Monday 02 June 2025 20:07:43 +0000 (0:00:00.307) 0:09:32.283 *********** 2025-06-02 20:08:45.757253 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.757259 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.757266 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.757272 | orchestrator | 2025-06-02 20:08:45.757278 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-02 20:08:45.757285 | orchestrator | Monday 02 June 2025 20:07:44 +0000 (0:00:00.764) 0:09:33.048 *********** 2025-06-02 20:08:45.757291 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.757297 | orchestrator | 2025-06-02 20:08:45.757305 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 20:08:45.757310 | orchestrator | Monday 02 June 2025 20:07:45 +0000 (0:00:00.511) 0:09:33.559 *********** 2025-06-02 20:08:45.757316 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.757322 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:08:45.757328 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:08:45.757335 | orchestrator | 2025-06-02 20:08:45.757341 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 20:08:45.757356 | orchestrator | Monday 02 June 2025 20:07:47 +0000 (0:00:02.109) 0:09:35.669 *********** 2025-06-02 20:08:45.757363 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 20:08:45.757370 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:08:45.757376 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.757382 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 20:08:45.757386 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 20:08:45.757390 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.757394 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 20:08:45.757398 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 20:08:45.757402 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.757409 | orchestrator | 2025-06-02 20:08:45.757415 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-02 20:08:45.757421 | orchestrator | Monday 02 June 2025 20:07:48 +0000 (0:00:01.408) 0:09:37.078 *********** 2025-06-02 20:08:45.757427 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.757432 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.757438 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.757445 | orchestrator | 2025-06-02 20:08:45.757450 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-02 20:08:45.757456 | orchestrator | Monday 02 June 2025 20:07:49 +0000 (0:00:00.295) 0:09:37.373 *********** 2025-06-02 20:08:45.757462 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.757468 | orchestrator | 2025-06-02 20:08:45.757474 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-02 20:08:45.757480 | orchestrator | Monday 02 June 2025 20:07:49 +0000 (0:00:00.516) 0:09:37.889 *********** 2025-06-02 20:08:45.757486 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.757493 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.757499 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.757505 | orchestrator | 2025-06-02 20:08:45.757511 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-02 20:08:45.757517 | orchestrator | Monday 02 June 2025 20:07:50 +0000 (0:00:01.250) 0:09:39.140 *********** 2025-06-02 20:08:45.757523 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.757529 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 20:08:45.757535 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.757541 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 20:08:45.757547 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.757552 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 20:08:45.757558 | orchestrator | 2025-06-02 20:08:45.757563 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 20:08:45.757573 | orchestrator | Monday 02 June 2025 20:07:55 +0000 (0:00:04.298) 0:09:43.439 *********** 2025-06-02 20:08:45.757579 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.757585 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:08:45.757592 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-06-02 20:08:45.757603 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:08:45.757609 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:08:45.757615 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:08:45.757621 | orchestrator | 2025-06-02 20:08:45.757628 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 20:08:45.757634 | orchestrator | Monday 02 June 2025 20:07:57 +0000 (0:00:02.331) 0:09:45.770 *********** 2025-06-02 20:08:45.757640 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 20:08:45.757646 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.757652 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 20:08:45.757658 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.757664 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 20:08:45.757668 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.757672 | orchestrator | 2025-06-02 20:08:45.757675 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-02 20:08:45.757679 | orchestrator | Monday 02 June 2025 20:07:58 +0000 (0:00:01.245) 0:09:47.016 *********** 2025-06-02 20:08:45.757683 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-02 20:08:45.757687 | orchestrator | 2025-06-02 20:08:45.757691 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-02 20:08:45.757694 | orchestrator | Monday 02 June 2025 20:07:58 +0000 (0:00:00.228) 0:09:47.244 *********** 2025-06-02 20:08:45.757702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757707 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757722 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.757726 | orchestrator | 2025-06-02 20:08:45.757729 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-02 20:08:45.757733 | orchestrator | Monday 02 June 2025 20:07:59 +0000 (0:00:01.080) 0:09:48.324 *********** 2025-06-02 20:08:45.757737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:08:45.757800 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.757803 | orchestrator | 2025-06-02 20:08:45.757807 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-02 20:08:45.757811 | orchestrator | Monday 02 June 2025 20:08:00 +0000 (0:00:00.589) 0:09:48.914 *********** 2025-06-02 20:08:45.757815 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:08:45.757823 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:08:45.757827 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:08:45.757831 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:08:45.757835 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:08:45.757838 | orchestrator | 2025-06-02 20:08:45.757842 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-02 20:08:45.757849 | orchestrator | Monday 02 June 2025 20:08:31 +0000 (0:00:30.835) 0:10:19.749 *********** 2025-06-02 20:08:45.757853 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.757857 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.757861 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.757865 | orchestrator | 2025-06-02 20:08:45.757868 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-02 20:08:45.757872 | orchestrator | Monday 02 June 2025 20:08:31 +0000 (0:00:00.272) 0:10:20.022 
*********** 2025-06-02 20:08:45.757876 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.757880 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.757884 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.757887 | orchestrator | 2025-06-02 20:08:45.757891 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-02 20:08:45.757895 | orchestrator | Monday 02 June 2025 20:08:31 +0000 (0:00:00.254) 0:10:20.277 *********** 2025-06-02 20:08:45.757899 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.757903 | orchestrator | 2025-06-02 20:08:45.757906 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-02 20:08:45.757910 | orchestrator | Monday 02 June 2025 20:08:32 +0000 (0:00:00.628) 0:10:20.905 *********** 2025-06-02 20:08:45.757914 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.757918 | orchestrator | 2025-06-02 20:08:45.757921 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-02 20:08:45.757925 | orchestrator | Monday 02 June 2025 20:08:32 +0000 (0:00:00.445) 0:10:21.351 *********** 2025-06-02 20:08:45.757929 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.757933 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.757937 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.757940 | orchestrator | 2025-06-02 20:08:45.757944 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-02 20:08:45.757948 | orchestrator | Monday 02 June 2025 20:08:34 +0000 (0:00:01.182) 0:10:22.533 *********** 2025-06-02 20:08:45.757955 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.757959 | orchestrator | 
changed: [testbed-node-4] 2025-06-02 20:08:45.757962 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.757966 | orchestrator | 2025-06-02 20:08:45.757970 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-02 20:08:45.757974 | orchestrator | Monday 02 June 2025 20:08:35 +0000 (0:00:01.514) 0:10:24.048 *********** 2025-06-02 20:08:45.757978 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:08:45.757982 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:08:45.757985 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:08:45.757989 | orchestrator | 2025-06-02 20:08:45.757993 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-02 20:08:45.757997 | orchestrator | Monday 02 June 2025 20:08:37 +0000 (0:00:01.999) 0:10:26.047 *********** 2025-06-02 20:08:45.758005 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.758064 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.758075 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 20:08:45.758080 | orchestrator | 2025-06-02 20:08:45.758087 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 20:08:45.758093 | orchestrator | Monday 02 June 2025 20:08:40 +0000 (0:00:02.902) 0:10:28.950 *********** 2025-06-02 20:08:45.758100 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.758107 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.758113 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.758119 | orchestrator | 2025-06-02 20:08:45.758126 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-06-02 20:08:45.758132 | orchestrator | Monday 02 June 2025 20:08:40 +0000 (0:00:00.328) 0:10:29.278 *********** 2025-06-02 20:08:45.758138 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:08:45.758144 | orchestrator | 2025-06-02 20:08:45.758150 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-02 20:08:45.758157 | orchestrator | Monday 02 June 2025 20:08:41 +0000 (0:00:00.493) 0:10:29.772 *********** 2025-06-02 20:08:45.758163 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.758169 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.758175 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.758180 | orchestrator | 2025-06-02 20:08:45.758185 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-02 20:08:45.758191 | orchestrator | Monday 02 June 2025 20:08:41 +0000 (0:00:00.565) 0:10:30.337 *********** 2025-06-02 20:08:45.758197 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.758202 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:08:45.758209 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:08:45.758214 | orchestrator | 2025-06-02 20:08:45.758220 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-02 20:08:45.758226 | orchestrator | Monday 02 June 2025 20:08:42 +0000 (0:00:00.329) 0:10:30.666 *********** 2025-06-02 20:08:45.758231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:08:45.758237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:08:45.758242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:08:45.758247 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:08:45.758252 | 
orchestrator | 2025-06-02 20:08:45.758258 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-02 20:08:45.758263 | orchestrator | Monday 02 June 2025 20:08:42 +0000 (0:00:00.613) 0:10:31.280 *********** 2025-06-02 20:08:45.758275 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:08:45.758281 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:08:45.758286 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:08:45.758291 | orchestrator | 2025-06-02 20:08:45.758297 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:08:45.758302 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-06-02 20:08:45.758309 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-02 20:08:45.758314 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-02 20:08:45.758320 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-06-02 20:08:45.758336 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-02 20:08:45.758342 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-02 20:08:45.758348 | orchestrator | 2025-06-02 20:08:45.758354 | orchestrator | 2025-06-02 20:08:45.758360 | orchestrator | 2025-06-02 20:08:45.758365 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:08:45.758371 | orchestrator | Monday 02 June 2025 20:08:43 +0000 (0:00:00.227) 0:10:31.507 *********** 2025-06-02 20:08:45.758377 | orchestrator | =============================================================================== 2025-06-02 20:08:45.758383 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 52.81s 2025-06-02 20:08:45.758397 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.84s 2025-06-02 20:08:45.758404 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.84s 2025-06-02 20:08:45.758410 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.22s 2025-06-02 20:08:45.758416 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.84s 2025-06-02 20:08:45.758422 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.82s 2025-06-02 20:08:45.758429 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.81s 2025-06-02 20:08:45.758434 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.46s 2025-06-02 20:08:45.758440 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.07s 2025-06-02 20:08:45.758446 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.82s 2025-06-02 20:08:45.758452 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.57s 2025-06-02 20:08:45.758458 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.68s 2025-06-02 20:08:45.758464 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.34s 2025-06-02 20:08:45.758470 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.30s 2025-06-02 20:08:45.758476 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.57s 2025-06-02 20:08:45.758481 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.55s 2025-06-02 20:08:45.758487 | orchestrator | ceph-mds : 
Create ceph filesystem --------------------------------------- 3.46s 2025-06-02 20:08:45.758492 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.46s 2025-06-02 20:08:45.758497 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.35s 2025-06-02 20:08:45.758503 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.14s
2025-06-02 20:08:45.758509 | orchestrator | 2025-06-02 20:08:45 | INFO  | Task 1f1f1cfd-78bc-410c-9813-f3188360aad0 is in state STARTED
2025-06-02 20:08:45.758515 | orchestrator | 2025-06-02 20:08:45 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 20:08:48 through 20:09:58: tasks af95d61a-65f2-487f-8e2e-16c1c21bbda6, 9fb9d2fa-d36c-429a-bf89-7eeefe94c052, and 1f1f1cfd-78bc-410c-9813-f3188360aad0 remained in state STARTED, followed each cycle by "Wait 1 second(s) until the next check" ...]
2025-06-02 20:10:02.019135 | orchestrator | 2025-06-02 20:10:02 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED
2025-06-02 20:10:02.020063 | orchestrator | 2025-06-02 20:10:02 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:02.021617 | orchestrator | 2025-06-02 20:10:02 | INFO  | Task 1f1f1cfd-78bc-410c-9813-f3188360aad0 is in state SUCCESS
2025-06-02 20:10:02.021871 | orchestrator | 2025-06-02 20:10:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:02.023390 | orchestrator |
2025-06-02 20:10:02.023435 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:10:02.023440 | orchestrator |
2025-06-02 20:10:02.023447 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:10:02.023454 | orchestrator | Monday 02 June 2025 20:07:05 +0000 (0:00:00.195) 0:00:00.195 *********** 2025-06-02 20:10:02.023465 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:02.023476 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:02.023483 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:02.023490 | orchestrator | 2025-06-02 20:10:02.023497 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:10:02.023504 | orchestrator | Monday 02 June 2025 20:07:05 +0000 (0:00:00.230) 0:00:00.426 *********** 2025-06-02 20:10:02.023512 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-02 20:10:02.023520 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-02 20:10:02.023527 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-02 20:10:02.023534 | orchestrator | 2025-06-02 20:10:02.023542 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-02 20:10:02.023550 | orchestrator | 2025-06-02 20:10:02.023557 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 20:10:02.023564 | orchestrator | Monday 02 June 2025 20:07:05 +0000 (0:00:00.336) 0:00:00.763 *********** 2025-06-02 20:10:02.023572 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:10:02.023580 | orchestrator | 2025-06-02 20:10:02.023588 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-02 20:10:02.023596 | orchestrator | Monday 02 June 2025 20:07:06 +0000 (0:00:00.441) 0:00:01.204 *********** 2025-06-02 20:10:02.023607 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'vm.max_map_count', 'value': 262144}) 2025-06-02 20:10:02.023615 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 20:10:02.023624 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 20:10:02.023631 | orchestrator | 2025-06-02 20:10:02.023639 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-02 20:10:02.023647 | orchestrator | Monday 02 June 2025 20:07:06 +0000 (0:00:00.628) 0:00:01.833 *********** 2025-06-02 20:10:02.023673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.023723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.023747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.023777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.023790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.023804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.023814 | orchestrator | 2025-06-02 20:10:02.023826 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 20:10:02.023833 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:01.447) 0:00:03.280 *********** 2025-06-02 20:10:02.023840 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:10:02.023847 | orchestrator | 2025-06-02 20:10:02.023854 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-02 20:10:02.023861 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:00.472) 0:00:03.752 *********** 2025-06-02 20:10:02.023875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.023884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.023896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.023910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.023924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.023930 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.023935 | orchestrator | 2025-06-02 20:10:02.023940 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-02 20:10:02.023945 | orchestrator | Monday 02 June 2025 20:07:11 +0000 (0:00:02.556) 0:00:06.309 *********** 2025-06-02 20:10:02.023956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:10:02.023962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:10:02.023967 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:02.023972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:10:02.023981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:10:02.023987 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:02.023996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:10:02.024006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:10:02.024012 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:02.024017 | orchestrator | 2025-06-02 20:10:02.024023 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-02 20:10:02.024028 | orchestrator | Monday 02 June 2025 20:07:12 +0000 (0:00:01.429) 0:00:07.739 *********** 2025-06-02 20:10:02.024034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:10:02.024044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:10:02.024056 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:02.024066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:10:02.024074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:10:02.024082 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:02.024091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:10:02.024106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:10:02.024119 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:02.024127 | orchestrator | 2025-06-02 20:10:02.024136 | orchestrator | TASK [opensearch : Copying over 
config.json files for services] **************** 2025-06-02 20:10:02.024144 | orchestrator | Monday 02 June 2025 20:07:13 +0000 (0:00:00.847) 0:00:08.587 *********** 2025-06-02 20:10:02.024152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.024166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.024173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.024186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.024192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.024205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.024210 | orchestrator | 2025-06-02 20:10:02.024216 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-02 20:10:02.024221 | orchestrator | Monday 02 June 2025 20:07:16 +0000 (0:00:02.447) 0:00:11.035 *********** 2025-06-02 20:10:02.024226 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:02.024232 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:02.024237 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:02.024243 | orchestrator | 2025-06-02 20:10:02.024248 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-02 20:10:02.024254 | orchestrator | Monday 02 June 2025 20:07:19 +0000 (0:00:03.790) 0:00:14.825 *********** 2025-06-02 20:10:02.024260 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:02.024265 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:02.024271 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:02.024276 | orchestrator | 2025-06-02 20:10:02.024281 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-02 20:10:02.024285 | orchestrator | Monday 02 June 2025 20:07:21 +0000 (0:00:01.808) 0:00:16.634 *********** 2025-06-02 20:10:02.024290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.024302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.024308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:10:02.024316 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.024321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.024330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:10:02.024339 | orchestrator | 2025-06-02 20:10:02.024344 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 20:10:02.024348 | orchestrator | Monday 02 June 2025 20:07:23 +0000 (0:00:01.996) 0:00:18.630 *********** 2025-06-02 20:10:02.024353 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:02.024358 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:02.024362 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:02.024367 | orchestrator | 2025-06-02 20:10:02.024371 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 20:10:02.024376 | orchestrator | Monday 02 June 2025 20:07:24 +0000 (0:00:00.292) 0:00:18.923 *********** 
2025-06-02 20:10:02.024380 | orchestrator | 2025-06-02 20:10:02.024385 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 20:10:02.024389 | orchestrator | Monday 02 June 2025 20:07:24 +0000 (0:00:00.061) 0:00:18.985 *********** 2025-06-02 20:10:02.024394 | orchestrator | 2025-06-02 20:10:02.024399 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 20:10:02.024403 | orchestrator | Monday 02 June 2025 20:07:24 +0000 (0:00:00.063) 0:00:19.048 *********** 2025-06-02 20:10:02.024408 | orchestrator | 2025-06-02 20:10:02.024412 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-02 20:10:02.024417 | orchestrator | Monday 02 June 2025 20:07:24 +0000 (0:00:00.234) 0:00:19.283 *********** 2025-06-02 20:10:02.024421 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:02.024426 | orchestrator | 2025-06-02 20:10:02.024430 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-02 20:10:02.024438 | orchestrator | Monday 02 June 2025 20:07:24 +0000 (0:00:00.204) 0:00:19.488 *********** 2025-06-02 20:10:02.024442 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:02.024447 | orchestrator | 2025-06-02 20:10:02.024452 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-02 20:10:02.024456 | orchestrator | Monday 02 June 2025 20:07:24 +0000 (0:00:00.194) 0:00:19.682 *********** 2025-06-02 20:10:02.024461 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:02.024465 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:02.024471 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:02.024479 | orchestrator | 2025-06-02 20:10:02.024486 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-02 20:10:02.024494 | orchestrator | 
Monday 02 June 2025 20:08:29 +0000 (0:01:05.045) 0:01:24.727 *********** 2025-06-02 20:10:02.024501 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:02.024509 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:02.024516 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:02.024523 | orchestrator | 2025-06-02 20:10:02.024530 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 20:10:02.024537 | orchestrator | Monday 02 June 2025 20:09:48 +0000 (0:01:18.826) 0:02:43.553 *********** 2025-06-02 20:10:02.024544 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:10:02.024552 | orchestrator | 2025-06-02 20:10:02.024560 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-02 20:10:02.024573 | orchestrator | Monday 02 June 2025 20:09:49 +0000 (0:00:00.678) 0:02:44.232 *********** 2025-06-02 20:10:02.024581 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:02.024589 | orchestrator | 2025-06-02 20:10:02.024596 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-02 20:10:02.024601 | orchestrator | Monday 02 June 2025 20:09:51 +0000 (0:00:02.347) 0:02:46.579 *********** 2025-06-02 20:10:02.024605 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:02.024610 | orchestrator | 2025-06-02 20:10:02.024615 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-02 20:10:02.024619 | orchestrator | Monday 02 June 2025 20:09:53 +0000 (0:00:02.185) 0:02:48.764 *********** 2025-06-02 20:10:02.024624 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:02.024628 | orchestrator | 2025-06-02 20:10:02.024633 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-02 20:10:02.024638 | orchestrator | Monday 
02 June 2025 20:09:56 +0000 (0:00:02.825) 0:02:51.590 *********** 2025-06-02 20:10:02.024642 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:02.024647 | orchestrator | 2025-06-02 20:10:02.024651 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:10:02.024657 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:10:02.024663 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:10:02.024668 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:10:02.024673 | orchestrator | 2025-06-02 20:10:02.024677 | orchestrator | 2025-06-02 20:10:02.024706 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:10:02.024716 | orchestrator | Monday 02 June 2025 20:09:59 +0000 (0:00:02.493) 0:02:54.083 *********** 2025-06-02 20:10:02.024720 | orchestrator | =============================================================================== 2025-06-02 20:10:02.024725 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.83s 2025-06-02 20:10:02.024729 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.05s 2025-06-02 20:10:02.024734 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.79s 2025-06-02 20:10:02.024738 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.83s 2025-06-02 20:10:02.024743 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.56s 2025-06-02 20:10:02.024748 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.49s 2025-06-02 20:10:02.024752 | orchestrator | opensearch : Copying over config.json files for 
services ---------------- 2.45s 2025-06-02 20:10:02.024757 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.35s 2025-06-02 20:10:02.024761 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.19s 2025-06-02 20:10:02.024766 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.00s 2025-06-02 20:10:02.024770 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.81s 2025-06-02 20:10:02.024775 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.45s 2025-06-02 20:10:02.024780 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.43s 2025-06-02 20:10:02.024784 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.85s 2025-06-02 20:10:02.024789 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2025-06-02 20:10:02.024793 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.63s 2025-06-02 20:10:02.024798 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2025-06-02 20:10:02.024810 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2025-06-02 20:10:02.024814 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.36s 2025-06-02 20:10:02.024819 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-06-02 20:10:05.065274 | orchestrator | 2025-06-02 20:10:05 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED 2025-06-02 20:10:05.066492 | orchestrator | 2025-06-02 20:10:05 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED 2025-06-02 20:10:05.066618 | orchestrator | 2025-06-02 20:10:05 | INFO  | 
Wait 1 second(s) until the next check 2025-06-02 20:10:08.117397 | orchestrator | 2025-06-02 20:10:08 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED 2025-06-02 20:10:08.118604 | orchestrator | 2025-06-02 20:10:08 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED 2025-06-02 20:10:08.118754 | orchestrator | 2025-06-02 20:10:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:10:11.164989 | orchestrator | 2025-06-02 20:10:11 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED 2025-06-02 20:10:11.165100 | orchestrator | 2025-06-02 20:10:11 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED 2025-06-02 20:10:11.165112 | orchestrator | 2025-06-02 20:10:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:10:14.210215 | orchestrator | 2025-06-02 20:10:14 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED 2025-06-02 20:10:14.211568 | orchestrator | 2025-06-02 20:10:14 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED 2025-06-02 20:10:14.211613 | orchestrator | 2025-06-02 20:10:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:10:17.257490 | orchestrator | 2025-06-02 20:10:17 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state STARTED 2025-06-02 20:10:17.259505 | orchestrator | 2025-06-02 20:10:17 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED 2025-06-02 20:10:17.259542 | orchestrator | 2025-06-02 20:10:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:10:20.311265 | orchestrator | 2025-06-02 20:10:20.311830 | orchestrator | 2025-06-02 20:10:20.311863 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-02 20:10:20.311877 | orchestrator | 2025-06-02 20:10:20.311892 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 20:10:20.311912 | orchestrator | 
Monday 02 June 2025 20:07:05 +0000 (0:00:00.103) 0:00:00.103 *********** 2025-06-02 20:10:20.311929 | orchestrator | ok: [localhost] => { 2025-06-02 20:10:20.311948 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-02 20:10:20.311964 | orchestrator | } 2025-06-02 20:10:20.311982 | orchestrator | 2025-06-02 20:10:20.312000 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-02 20:10:20.312019 | orchestrator | Monday 02 June 2025 20:07:05 +0000 (0:00:00.029) 0:00:00.132 *********** 2025-06-02 20:10:20.312036 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-02 20:10:20.312059 | orchestrator | ...ignoring 2025-06-02 20:10:20.312078 | orchestrator | 2025-06-02 20:10:20.312097 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-02 20:10:20.312117 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:02.745) 0:00:02.878 *********** 2025-06-02 20:10:20.312137 | orchestrator | skipping: [localhost] 2025-06-02 20:10:20.312156 | orchestrator | 2025-06-02 20:10:20.312175 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-02 20:10:20.312214 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:00.039) 0:00:02.917 *********** 2025-06-02 20:10:20.312226 | orchestrator | ok: [localhost] 2025-06-02 20:10:20.312237 | orchestrator | 2025-06-02 20:10:20.312248 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:10:20.312260 | orchestrator | 2025-06-02 20:10:20.312274 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:10:20.312293 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:00.139) 
0:00:03.056 *********** 2025-06-02 20:10:20.312311 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.312327 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:20.312343 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:20.312359 | orchestrator | 2025-06-02 20:10:20.312377 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:10:20.312398 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:00.249) 0:00:03.306 *********** 2025-06-02 20:10:20.312418 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-02 20:10:20.312440 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-02 20:10:20.312459 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-02 20:10:20.312478 | orchestrator | 2025-06-02 20:10:20.312491 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-02 20:10:20.312504 | orchestrator | 2025-06-02 20:10:20.312517 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-02 20:10:20.312530 | orchestrator | Monday 02 June 2025 20:07:09 +0000 (0:00:00.710) 0:00:04.017 *********** 2025-06-02 20:10:20.312543 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 20:10:20.312555 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 20:10:20.312568 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 20:10:20.312581 | orchestrator | 2025-06-02 20:10:20.312593 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 20:10:20.312606 | orchestrator | Monday 02 June 2025 20:07:09 +0000 (0:00:00.368) 0:00:04.386 *********** 2025-06-02 20:10:20.312633 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:10:20.312647 | orchestrator | 
2025-06-02 20:10:20.312660 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-02 20:10:20.312702 | orchestrator | Monday 02 June 2025 20:07:10 +0000 (0:00:00.508) 0:00:04.895 *********** 2025-06-02 20:10:20.312746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:10:20.312779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:10:20.312809 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:10:20.312830 | orchestrator | 2025-06-02 20:10:20.312862 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-02 20:10:20.312893 | 
orchestrator | Monday 02 June 2025 20:07:13 +0000 (0:00:03.193) 0:00:08.088 *********** 2025-06-02 20:10:20.312913 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.312932 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.312951 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.312969 | orchestrator | 2025-06-02 20:10:20.312987 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-02 20:10:20.313004 | orchestrator | Monday 02 June 2025 20:07:14 +0000 (0:00:00.774) 0:00:08.863 *********** 2025-06-02 20:10:20.313021 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.313040 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.313059 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.313078 | orchestrator | 2025-06-02 20:10:20.313095 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-02 20:10:20.313111 | orchestrator | Monday 02 June 2025 20:07:15 +0000 (0:00:01.493) 0:00:10.357 *********** 2025-06-02 20:10:20.313136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:10:20.313165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:10:20.313196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:10:20.313216 | orchestrator | 2025-06-02 20:10:20.313235 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-02 20:10:20.313252 | orchestrator | Monday 02 June 2025 20:07:20 +0000 (0:00:04.306) 0:00:14.663 *********** 2025-06-02 20:10:20.313271 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.313288 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.313303 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.313319 | orchestrator | 2025-06-02 20:10:20.313336 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-02 20:10:20.313358 | orchestrator | Monday 02 June 2025 20:07:21 +0000 (0:00:01.368) 0:00:16.031 *********** 2025-06-02 20:10:20.313375 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.313394 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:20.313412 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:20.313431 | orchestrator | 2025-06-02 20:10:20.313450 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 20:10:20.313470 | orchestrator | Monday 02 June 2025 20:07:25 +0000 (0:00:03.940) 0:00:19.972 *********** 2025-06-02 20:10:20.313489 | orchestrator | included: 
/ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:10:20.313508 | orchestrator | 2025-06-02 20:10:20.313528 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-02 20:10:20.313547 | orchestrator | Monday 02 June 2025 20:07:25 +0000 (0:00:00.602) 0:00:20.578 *********** 2025-06-02 20:10:20.313582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:10:20.313615 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.313652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:10:20.313714 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.313744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:10:20.313774 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.313791 | orchestrator | 2025-06-02 20:10:20.313809 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-02 20:10:20.313827 | orchestrator | Monday 02 June 2025 20:07:29 +0000 (0:00:03.695) 0:00:24.274 *********** 2025-06-02 20:10:20.313848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:10:20.313868 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.313906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:10:20.313939 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.313961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:10:20.313981 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.314000 | orchestrator | 2025-06-02 20:10:20.314191 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-02 20:10:20.314242 | orchestrator | Monday 02 June 2025 20:07:32 +0000 (0:00:03.102) 0:00:27.377 *********** 2025-06-02 20:10:20.314276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:10:20.314311 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.314348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:10:20.314367 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.314392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:10:20.314421 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.314439 | orchestrator | 2025-06-02 20:10:20.314457 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-02 20:10:20.314475 | orchestrator | Monday 02 June 2025 20:07:36 +0000 (0:00:03.346) 0:00:30.723 *********** 2025-06-02 20:10:20.314558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 2025-06-02 20:10:20 | INFO  | Task af95d61a-65f2-487f-8e2e-16c1c21bbda6 is in state SUCCESS 2025-06-02 20:10:20.314582 | orchestrator | '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:10:20.314610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:10:20.314655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:10:20.314702 | orchestrator | 2025-06-02 20:10:20.314721 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-02 20:10:20.314739 | orchestrator | Monday 02 June 2025 20:07:39 +0000 (0:00:03.662) 0:00:34.386 *********** 2025-06-02 20:10:20.314757 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.314775 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:20.314793 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:20.314810 | orchestrator | 2025-06-02 20:10:20.314827 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-02 20:10:20.314842 | orchestrator | Monday 02 June 2025 20:07:40 +0000 (0:00:01.060) 0:00:35.447 *********** 2025-06-02 20:10:20.314860 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.314877 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:20.314894 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:20.314911 | orchestrator | 2025-06-02 20:10:20.314981 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-02 20:10:20.315014 | orchestrator | Monday 02 June 2025 20:07:41 +0000 (0:00:00.438) 0:00:35.885 *********** 2025-06-02 20:10:20.315033 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.315052 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:20.315071 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:20.315090 | orchestrator | 2025-06-02 20:10:20.315110 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-02 20:10:20.315128 | orchestrator | Monday 02 June 2025 
20:07:41 +0000 (0:00:00.442) 0:00:36.328 *********** 2025-06-02 20:10:20.315148 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-02 20:10:20.315168 | orchestrator | ...ignoring 2025-06-02 20:10:20.315196 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-02 20:10:20.315215 | orchestrator | ...ignoring 2025-06-02 20:10:20.315288 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-02 20:10:20.315309 | orchestrator | ...ignoring 2025-06-02 20:10:20.315329 | orchestrator | 2025-06-02 20:10:20.315348 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-02 20:10:20.315367 | orchestrator | Monday 02 June 2025 20:07:52 +0000 (0:00:10.964) 0:00:47.292 *********** 2025-06-02 20:10:20.315384 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.315402 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:20.315419 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:20.315437 | orchestrator | 2025-06-02 20:10:20.315457 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-02 20:10:20.315525 | orchestrator | Monday 02 June 2025 20:07:53 +0000 (0:00:00.601) 0:00:47.894 *********** 2025-06-02 20:10:20.315545 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.315563 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.315582 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.315600 | orchestrator | 2025-06-02 20:10:20.315618 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-02 20:10:20.315637 | orchestrator | Monday 
02 June 2025 20:07:53 +0000 (0:00:00.383) 0:00:48.277 *********** 2025-06-02 20:10:20.315656 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.315753 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.315774 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.315794 | orchestrator | 2025-06-02 20:10:20.315814 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-02 20:10:20.315828 | orchestrator | Monday 02 June 2025 20:07:54 +0000 (0:00:00.411) 0:00:48.689 *********** 2025-06-02 20:10:20.315847 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.315865 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.315885 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.315905 | orchestrator | 2025-06-02 20:10:20.315919 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-02 20:10:20.315935 | orchestrator | Monday 02 June 2025 20:07:54 +0000 (0:00:00.387) 0:00:49.076 *********** 2025-06-02 20:10:20.315953 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.315972 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:20.316037 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:20.316059 | orchestrator | 2025-06-02 20:10:20.316094 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-02 20:10:20.316113 | orchestrator | Monday 02 June 2025 20:07:55 +0000 (0:00:00.638) 0:00:49.715 *********** 2025-06-02 20:10:20.316131 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.316150 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.316169 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.316188 | orchestrator | 2025-06-02 20:10:20.316207 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 20:10:20.316241 | orchestrator | Monday 02 June 2025 
20:07:55 +0000 (0:00:00.467) 0:00:50.182 *********** 2025-06-02 20:10:20.316304 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.316325 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.316343 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-02 20:10:20.316363 | orchestrator | 2025-06-02 20:10:20.316383 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-02 20:10:20.316402 | orchestrator | Monday 02 June 2025 20:07:55 +0000 (0:00:00.352) 0:00:50.535 *********** 2025-06-02 20:10:20.316419 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.316439 | orchestrator | 2025-06-02 20:10:20.316458 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-02 20:10:20.316478 | orchestrator | Monday 02 June 2025 20:08:06 +0000 (0:00:10.267) 0:01:00.802 *********** 2025-06-02 20:10:20.316497 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.316561 | orchestrator | 2025-06-02 20:10:20.316582 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 20:10:20.316602 | orchestrator | Monday 02 June 2025 20:08:06 +0000 (0:00:00.132) 0:01:00.935 *********** 2025-06-02 20:10:20.316621 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.316640 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.316657 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.316702 | orchestrator | 2025-06-02 20:10:20.316770 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-02 20:10:20.316790 | orchestrator | Monday 02 June 2025 20:08:07 +0000 (0:00:00.985) 0:01:01.920 *********** 2025-06-02 20:10:20.316809 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.316828 | orchestrator | 2025-06-02 20:10:20.316848 | orchestrator | RUNNING HANDLER [mariadb : 
Wait for first MariaDB service port liveness] ******* 2025-06-02 20:10:20.316867 | orchestrator | Monday 02 June 2025 20:08:14 +0000 (0:00:07.083) 0:01:09.003 *********** 2025-06-02 20:10:20.316887 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.316906 | orchestrator | 2025-06-02 20:10:20.316924 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-02 20:10:20.316942 | orchestrator | Monday 02 June 2025 20:08:16 +0000 (0:00:02.534) 0:01:11.538 *********** 2025-06-02 20:10:20.317010 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.317031 | orchestrator | 2025-06-02 20:10:20.317050 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-02 20:10:20.317070 | orchestrator | Monday 02 June 2025 20:08:19 +0000 (0:00:02.242) 0:01:13.781 *********** 2025-06-02 20:10:20.317088 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.317106 | orchestrator | 2025-06-02 20:10:20.317388 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-02 20:10:20.317413 | orchestrator | Monday 02 June 2025 20:08:19 +0000 (0:00:00.128) 0:01:13.909 *********** 2025-06-02 20:10:20.317423 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.317433 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.317442 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.317494 | orchestrator | 2025-06-02 20:10:20.317506 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-02 20:10:20.317525 | orchestrator | Monday 02 June 2025 20:08:19 +0000 (0:00:00.501) 0:01:14.410 *********** 2025-06-02 20:10:20.317536 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.317545 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-02 20:10:20.317556 | orchestrator | changed: [testbed-node-1] 
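The handler sequence logged above (bootstrap container on testbed-node-0, wait for port liveness and WSREP sync there, then start MariaDB on the remaining nodes) can be sketched as an ordered plan. This is a minimal illustration of the flow visible in this log, not kolla-ansible code; `bootstrap_plan` and its step labels are hypothetical, borrowed from the task names in the output:

```python
def bootstrap_plan(nodes):
    """Return the ordered (node, step) pairs for a fresh Galera cluster.

    Galera must be bootstrapped on exactly one node; every other node only
    starts after the first node is reachable and reports WSREP synced, so
    joiners can copy state from it.
    """
    if not nodes:
        return []
    first, rest = nodes[0], nodes[1:]
    plan = [
        (first, "Running MariaDB bootstrap container"),
        (first, "Wait for first MariaDB service port liveness"),
        (first, "Wait for first MariaDB service to sync WSREP"),
    ]
    for node in rest:  # joiners come up one at a time, each syncing before the next
        plan += [
            (node, "Start MariaDB container"),
            (node, "Wait for MariaDB service port liveness"),
            (node, "Wait for MariaDB service to sync WSREP"),
        ]
    return plan
```

For the three testbed nodes this yields nine steps, matching the order of the handlers and the two "Start mariadb services" plays that follow in the log.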
2025-06-02 20:10:20.317565 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:20.317575 | orchestrator | 2025-06-02 20:10:20.317585 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-02 20:10:20.317598 | orchestrator | skipping: no hosts matched 2025-06-02 20:10:20.317614 | orchestrator | 2025-06-02 20:10:20.317833 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 20:10:20.317850 | orchestrator | 2025-06-02 20:10:20.317872 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 20:10:20.317882 | orchestrator | Monday 02 June 2025 20:08:20 +0000 (0:00:00.325) 0:01:14.736 *********** 2025-06-02 20:10:20.317892 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:20.317901 | orchestrator | 2025-06-02 20:10:20.317952 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 20:10:20.317963 | orchestrator | Monday 02 June 2025 20:08:38 +0000 (0:00:18.901) 0:01:33.637 *********** 2025-06-02 20:10:20.317973 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:20.317982 | orchestrator | 2025-06-02 20:10:20.317992 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 20:10:20.318002 | orchestrator | Monday 02 June 2025 20:08:59 +0000 (0:00:20.617) 0:01:54.255 *********** 2025-06-02 20:10:20.318011 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:20.318057 | orchestrator | 2025-06-02 20:10:20.318068 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 20:10:20.318077 | orchestrator | 2025-06-02 20:10:20.318087 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 20:10:20.318097 | orchestrator | Monday 02 June 2025 20:09:02 +0000 (0:00:02.491) 0:01:56.747 *********** 
2025-06-02 20:10:20.318107 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:20.318116 | orchestrator | 2025-06-02 20:10:20.318126 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 20:10:20.318136 | orchestrator | Monday 02 June 2025 20:09:21 +0000 (0:00:19.750) 0:02:16.497 *********** 2025-06-02 20:10:20.318145 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:20.318155 | orchestrator | 2025-06-02 20:10:20.318165 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 20:10:20.318174 | orchestrator | Monday 02 June 2025 20:09:43 +0000 (0:00:21.588) 0:02:38.086 *********** 2025-06-02 20:10:20.318225 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:20.318237 | orchestrator | 2025-06-02 20:10:20.318247 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-02 20:10:20.318257 | orchestrator | 2025-06-02 20:10:20.318266 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 20:10:20.318276 | orchestrator | Monday 02 June 2025 20:09:46 +0000 (0:00:02.836) 0:02:40.923 *********** 2025-06-02 20:10:20.318285 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.318295 | orchestrator | 2025-06-02 20:10:20.318304 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 20:10:20.318314 | orchestrator | Monday 02 June 2025 20:09:57 +0000 (0:00:11.246) 0:02:52.169 *********** 2025-06-02 20:10:20.318324 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.318333 | orchestrator | 2025-06-02 20:10:20.318343 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 20:10:20.318352 | orchestrator | Monday 02 June 2025 20:10:03 +0000 (0:00:05.572) 0:02:57.742 *********** 2025-06-02 20:10:20.318362 | orchestrator | ok: [testbed-node-0] 
2025-06-02 20:10:20.318371 | orchestrator | 2025-06-02 20:10:20.318381 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-02 20:10:20.318390 | orchestrator | 2025-06-02 20:10:20.318400 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-02 20:10:20.318409 | orchestrator | Monday 02 June 2025 20:10:05 +0000 (0:00:02.340) 0:03:00.083 *********** 2025-06-02 20:10:20.318419 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:10:20.318453 | orchestrator | 2025-06-02 20:10:20.318466 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-02 20:10:20.318477 | orchestrator | Monday 02 June 2025 20:10:05 +0000 (0:00:00.531) 0:03:00.614 *********** 2025-06-02 20:10:20.318489 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.318499 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.318510 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.318521 | orchestrator | 2025-06-02 20:10:20.318532 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-02 20:10:20.318558 | orchestrator | Monday 02 June 2025 20:10:08 +0000 (0:00:02.326) 0:03:02.941 *********** 2025-06-02 20:10:20.318569 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.318579 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.318590 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.318601 | orchestrator | 2025-06-02 20:10:20.318611 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-02 20:10:20.318646 | orchestrator | Monday 02 June 2025 20:10:10 +0000 (0:00:02.058) 0:03:05.000 *********** 2025-06-02 20:10:20.318659 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.318693 | orchestrator | skipping: [testbed-node-2] 
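The `custom_member_list` entries dumped repeatedly in the task items earlier in this log follow a simple pattern: the first shard member is the active HAProxy backend, and every other member carries the `backup` keyword so it only receives traffic if the primary fails its health checks. A sketch that reproduces those lines; `mariadb_member_lines` is a hypothetical helper, not part of kolla-ansible:

```python
def mariadb_member_lines(members, port=3306):
    """Render haproxy 'server' lines in the shape seen in custom_member_list:
    active primary first, all remaining shard members marked backup."""
    lines = []
    for i, (name, addr) in enumerate(members):
        line = (f" server {name} {addr}:{port} "
                f"check port {port} inter 2000 rise 2 fall 5")
        if i > 0:  # only the first member takes traffic under normal operation
            line += " backup"
        lines.append(line)
    return lines
```

With the three testbed nodes and their 192.168.16.x addresses this yields exactly the strings shown in the item dumps above.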
2025-06-02 20:10:20.318710 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.318727 | orchestrator | 2025-06-02 20:10:20.318744 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-02 20:10:20.318756 | orchestrator | Monday 02 June 2025 20:10:12 +0000 (0:00:02.113) 0:03:07.113 *********** 2025-06-02 20:10:20.318766 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.318776 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.318786 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:20.318795 | orchestrator | 2025-06-02 20:10:20.318805 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-02 20:10:20.318815 | orchestrator | Monday 02 June 2025 20:10:14 +0000 (0:00:02.142) 0:03:09.256 *********** 2025-06-02 20:10:20.318824 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:20.318834 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:20.318844 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:20.318884 | orchestrator | 2025-06-02 20:10:20.318902 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-02 20:10:20.318912 | orchestrator | Monday 02 June 2025 20:10:17 +0000 (0:00:02.872) 0:03:12.129 *********** 2025-06-02 20:10:20.318922 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:20.318931 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:20.318941 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:20.318950 | orchestrator | 2025-06-02 20:10:20.318960 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:10:20.318970 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-02 20:10:20.318980 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-02 
20:10:20.318991 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 20:10:20.319001 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 20:10:20.319011 | orchestrator | 2025-06-02 20:10:20.319020 | orchestrator | 2025-06-02 20:10:20.319030 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:10:20.319040 | orchestrator | Monday 02 June 2025 20:10:17 +0000 (0:00:00.222) 0:03:12.351 *********** 2025-06-02 20:10:20.319049 | orchestrator | =============================================================================== 2025-06-02 20:10:20.319059 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 42.21s 2025-06-02 20:10:20.319068 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.65s 2025-06-02 20:10:20.319103 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.25s 2025-06-02 20:10:20.319113 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.96s 2025-06-02 20:10:20.319122 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.27s 2025-06-02 20:10:20.319140 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.08s 2025-06-02 20:10:20.319159 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.57s 2025-06-02 20:10:20.319169 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.33s 2025-06-02 20:10:20.319178 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.31s 2025-06-02 20:10:20.319188 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.94s 2025-06-02 20:10:20.319197 | orchestrator | 
service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.70s 2025-06-02 20:10:20.319207 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.66s 2025-06-02 20:10:20.319217 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.35s 2025-06-02 20:10:20.319226 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.19s 2025-06-02 20:10:20.319236 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.10s 2025-06-02 20:10:20.319246 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.87s 2025-06-02 20:10:20.319255 | orchestrator | Check MariaDB service --------------------------------------------------- 2.75s 2025-06-02 20:10:20.319265 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.53s 2025-06-02 20:10:20.319274 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.34s 2025-06-02 20:10:20.319310 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.33s 2025-06-02 20:10:20.319320 | orchestrator | 2025-06-02 20:10:20 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED 2025-06-02 20:10:20.319331 | orchestrator | 2025-06-02 20:10:20 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED 2025-06-02 20:10:20.319341 | orchestrator | 2025-06-02 20:10:20 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED 2025-06-02 20:10:20.319351 | orchestrator | 2025-06-02 20:10:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:10:23.363227 | orchestrator | 2025-06-02 20:10:23 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED 2025-06-02 20:10:23.363582 | orchestrator | 2025-06-02 20:10:23 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED 
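The TASKS RECAP above lists each task with its duration, e.g. "mariadb : Wait for MariaDB service port liveness ----- 42.21s". When comparing runs or hunting for slow steps, lines of that shape can be parsed mechanically; `parse_recap` is a hypothetical helper for illustration, not part of the tooling shown here:

```python
import re

# Matches recap lines of the form "task name ----...---- 12.34s".
RECAP_RE = re.compile(r"^(?P<name>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Extract (task name, seconds) pairs from TASKS RECAP output lines.

    Lines that do not match the recap shape are skipped, so the whole
    console stream can be fed through unchanged.
    """
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("name"), float(m.group("secs"))))
    return out
```

Sorting the result by the second element descending reproduces the ordering Ansible's profile_tasks callback already prints.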
2025-06-02 20:10:23.364419 | orchestrator | 2025-06-02 20:10:23 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:23.364595 | orchestrator | 2025-06-02 20:10:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:26.414848 | orchestrator | 2025-06-02 20:10:26 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:26.416900 | orchestrator | 2025-06-02 20:10:26 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:26.420400 | orchestrator | 2025-06-02 20:10:26 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:26.420457 | orchestrator | 2025-06-02 20:10:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:29.441538 | orchestrator | 2025-06-02 20:10:29 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:29.442583 | orchestrator | 2025-06-02 20:10:29 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:29.443257 | orchestrator | 2025-06-02 20:10:29 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:29.443312 | orchestrator | 2025-06-02 20:10:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:32.487417 | orchestrator | 2025-06-02 20:10:32 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:32.489420 | orchestrator | 2025-06-02 20:10:32 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:32.491300 | orchestrator | 2025-06-02 20:10:32 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:32.491344 | orchestrator | 2025-06-02 20:10:32 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:35.539454 | orchestrator | 2025-06-02 20:10:35 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:35.539571 | orchestrator | 2025-06-02 20:10:35 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:35.539979 | orchestrator | 2025-06-02 20:10:35 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:35.540035 | orchestrator | 2025-06-02 20:10:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:38.571851 | orchestrator | 2025-06-02 20:10:38 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:38.572017 | orchestrator | 2025-06-02 20:10:38 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:38.572512 | orchestrator | 2025-06-02 20:10:38 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:38.572537 | orchestrator | 2025-06-02 20:10:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:41.618646 | orchestrator | 2025-06-02 20:10:41 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:41.621705 | orchestrator | 2025-06-02 20:10:41 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:41.623073 | orchestrator | 2025-06-02 20:10:41 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:41.623107 | orchestrator | 2025-06-02 20:10:41 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:44.655236 | orchestrator | 2025-06-02 20:10:44 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:44.655342 | orchestrator | 2025-06-02 20:10:44 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:44.657470 | orchestrator | 2025-06-02 20:10:44 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:44.657719 | orchestrator | 2025-06-02 20:10:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:47.695446 | orchestrator | 2025-06-02 20:10:47 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:47.697452 | orchestrator | 2025-06-02 20:10:47 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:47.698624 | orchestrator | 2025-06-02 20:10:47 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:47.698781 | orchestrator | 2025-06-02 20:10:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:50.742084 | orchestrator | 2025-06-02 20:10:50 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:50.742169 | orchestrator | 2025-06-02 20:10:50 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:50.742701 | orchestrator | 2025-06-02 20:10:50 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:50.742730 | orchestrator | 2025-06-02 20:10:50 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:53.786773 | orchestrator | 2025-06-02 20:10:53 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:53.788176 | orchestrator | 2025-06-02 20:10:53 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:53.792231 | orchestrator | 2025-06-02 20:10:53 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:53.792309 | orchestrator | 2025-06-02 20:10:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:56.830426 | orchestrator | 2025-06-02 20:10:56 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state STARTED
2025-06-02 20:10:56.835061 | orchestrator | 2025-06-02 20:10:56 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:56.835136 | orchestrator | 2025-06-02 20:10:56 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:56.835150 | orchestrator | 2025-06-02 20:10:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:59.892193 | orchestrator | 2025-06-02 20:10:59 | INFO  | Task 9fb9d2fa-d36c-429a-bf89-7eeefe94c052 is in state SUCCESS
2025-06-02 20:10:59.894745 | orchestrator |
2025-06-02 20:10:59.894826 | orchestrator |
2025-06-02 20:10:59.894840 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-06-02 20:10:59.894852 | orchestrator |
2025-06-02 20:10:59.894871 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-02 20:10:59.894886 | orchestrator | Monday 02 June 2025 20:08:48 +0000 (0:00:00.570) 0:00:00.570 ***********
2025-06-02 20:10:59.894897 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:10:59.894909 | orchestrator |
2025-06-02 20:10:59.894920 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-02 20:10:59.894931 | orchestrator | Monday 02 June 2025 20:08:48 +0000 (0:00:00.585) 0:00:01.155 ***********
2025-06-02 20:10:59.895049 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.895695 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:10:59.895723 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:10:59.896069 | orchestrator |
2025-06-02 20:10:59.896083 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-02 20:10:59.896094 | orchestrator | Monday 02 June 2025 20:08:49 +0000 (0:00:00.649) 0:00:01.804 ***********
2025-06-02 20:10:59.896105 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.896116 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:10:59.896127 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:10:59.896138 | orchestrator |
2025-06-02 20:10:59.896149 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-02 20:10:59.896159 | orchestrator | Monday 02 June 2025 20:08:49 +0000 (0:00:00.248) 0:00:02.053 ***********
2025-06-02 20:10:59.896170 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.896181 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:10:59.896191 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:10:59.896202 | orchestrator |
2025-06-02 20:10:59.896213 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-02 20:10:59.896224 | orchestrator | Monday 02 June 2025 20:08:50 +0000 (0:00:00.700) 0:00:02.754 ***********
2025-06-02 20:10:59.896235 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.896245 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:10:59.896256 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:10:59.896267 | orchestrator |
2025-06-02 20:10:59.896277 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-02 20:10:59.896288 | orchestrator | Monday 02 June 2025 20:08:50 +0000 (0:00:00.263) 0:00:03.017 ***********
2025-06-02 20:10:59.896299 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.896310 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:10:59.896320 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:10:59.896331 | orchestrator |
2025-06-02 20:10:59.896342 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-02 20:10:59.896353 | orchestrator | Monday 02 June 2025 20:08:50 +0000 (0:00:00.271) 0:00:03.288 ***********
2025-06-02 20:10:59.896364 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.896374 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:10:59.896386 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:10:59.896418 | orchestrator |
2025-06-02 20:10:59.896429 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-02 20:10:59.896440 | orchestrator | Monday 02 June 2025 20:08:51 +0000 (0:00:00.261) 0:00:03.549 ***********
2025-06-02 20:10:59.896451 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.896463 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.896473 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.896485 | orchestrator |
2025-06-02 20:10:59.896503 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-02 20:10:59.896522 | orchestrator | Monday 02 June 2025 20:08:51 +0000 (0:00:00.351) 0:00:03.901 ***********
2025-06-02 20:10:59.896541 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.896559 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:10:59.896577 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:10:59.896589 | orchestrator |
2025-06-02 20:10:59.896600 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-02 20:10:59.896611 | orchestrator | Monday 02 June 2025 20:08:51 +0000 (0:00:00.242) 0:00:04.144 ***********
2025-06-02 20:10:59.896622 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 20:10:59.896633 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:10:59.896644 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:10:59.896681 | orchestrator |
2025-06-02 20:10:59.896701 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-02 20:10:59.896720 | orchestrator | Monday 02 June 2025 20:08:52 +0000 (0:00:00.533) 0:00:04.677 ***********
2025-06-02 20:10:59.896742 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.896761 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:10:59.896774 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:10:59.896786 | orchestrator |
2025-06-02 20:10:59.896799 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-02 20:10:59.896812 | orchestrator | Monday 02 June 2025 20:08:52 +0000 (0:00:00.368) 0:00:05.046 ***********
2025-06-02 20:10:59.896824 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 20:10:59.896846 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:10:59.896858 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:10:59.896868 | orchestrator |
2025-06-02 20:10:59.896879 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-02 20:10:59.896890 | orchestrator | Monday 02 June 2025 20:08:54 +0000 (0:00:02.106) 0:00:07.153 ***********
2025-06-02 20:10:59.896901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 20:10:59.896912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 20:10:59.896923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 20:10:59.896935 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.896946 | orchestrator |
2025-06-02 20:10:59.896957 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-02 20:10:59.897083 | orchestrator | Monday 02 June 2025 20:08:55 +0000 (0:00:00.387) 0:00:07.540 ***********
2025-06-02 20:10:59.897100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 20:10:59.897114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 20:10:59.897125 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 20:10:59.897148 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.897159 | orchestrator |
2025-06-02 20:10:59.897170 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-02 20:10:59.897181 | orchestrator | Monday 02 June 2025 20:08:55 +0000 (0:00:00.743) 0:00:08.283 ***********
2025-06-02 20:10:59.897194 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 20:10:59.897208 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 20:10:59.897220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 20:10:59.897231 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.897242 | orchestrator |
2025-06-02 20:10:59.897253 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-02 20:10:59.897264 | orchestrator | Monday 02 June 2025 20:08:55 +0000 (0:00:00.146) 0:00:08.430 ***********
2025-06-02 20:10:59.897277 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f56b4da6ac4a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 20:08:53.173937', 'end': '2025-06-02 20:08:53.236738', 'delta': '0:00:00.062801', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f56b4da6ac4a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 20:10:59.897297 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '801821aa3d50', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 20:08:53.920560', 'end': '2025-06-02 20:08:53.980072', 'delta': '0:00:00.059512', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['801821aa3d50'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 20:10:59.897343 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0a38bfe51fe0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 20:08:54.472511', 'end': '2025-06-02 20:08:54.519245', 'delta': '0:00:00.046734', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0a38bfe51fe0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 20:10:59.897364 | orchestrator |
2025-06-02 20:10:59.897375 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-02 20:10:59.897386 | orchestrator | Monday 02 June 2025 20:08:56 +0000 (0:00:00.349) 0:00:08.780 ***********
2025-06-02 20:10:59.897397 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.897408 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:10:59.897419 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:10:59.897430 | orchestrator |
2025-06-02 20:10:59.897441 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-02 20:10:59.897452 | orchestrator | Monday 02 June 2025 20:08:56 +0000 (0:00:00.411) 0:00:09.191 ***********
2025-06-02 20:10:59.897462 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-06-02 20:10:59.897473 | orchestrator |
2025-06-02 20:10:59.897484 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-02 20:10:59.897495 | orchestrator | Monday 02 June 2025 20:08:58 +0000 (0:00:01.834) 0:00:11.026 ***********
2025-06-02 20:10:59.897506 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.897517 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.897528 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.897539 | orchestrator |
2025-06-02 20:10:59.897550 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-02 20:10:59.897561 | orchestrator | Monday 02 June 2025 20:08:58 +0000 (0:00:00.400) 0:00:11.307 ***********
2025-06-02 20:10:59.897571 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.897582 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.897593 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.897603 | orchestrator |
2025-06-02 20:10:59.897614 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 20:10:59.897625 | orchestrator | Monday 02 June 2025 20:08:59 +0000 (0:00:00.478) 0:00:11.707 ***********
2025-06-02 20:10:59.897636 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.897647 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.897696 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.897709 | orchestrator |
2025-06-02 20:10:59.897720 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-02 20:10:59.897733 | orchestrator | Monday 02 June 2025 20:08:59 +0000 (0:00:00.123) 0:00:12.185 ***********
2025-06-02 20:10:59.897746 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:10:59.897758 | orchestrator |
2025-06-02 20:10:59.897771 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-02 20:10:59.897783 | orchestrator | Monday 02 June 2025 20:08:59 +0000 (0:00:00.219) 0:00:12.309 ***********
2025-06-02 20:10:59.897795 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.897807 | orchestrator |
2025-06-02 20:10:59.897820 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 20:10:59.897832 | orchestrator | Monday 02 June 2025 20:09:00 +0000 (0:00:00.219) 0:00:12.528 ***********
2025-06-02 20:10:59.897845 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.897858 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.897871 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.897883 | orchestrator |
2025-06-02 20:10:59.897896 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-02 20:10:59.897908 | orchestrator | Monday 02 June 2025 20:09:00 +0000 (0:00:00.306) 0:00:12.835 ***********
2025-06-02 20:10:59.897921 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.897933 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.897945 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.897958 | orchestrator |
2025-06-02 20:10:59.897970 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-02 20:10:59.897999 | orchestrator | Monday 02 June 2025 20:09:00 +0000 (0:00:00.301) 0:00:13.136 ***********
2025-06-02 20:10:59.898183 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.898208 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.898227 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.898238 | orchestrator |
2025-06-02 20:10:59.898249 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-02 20:10:59.898261 | orchestrator | Monday 02 June 2025 20:09:01 +0000 (0:00:00.476) 0:00:13.613 ***********
2025-06-02 20:10:59.898272 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.898290 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.898301 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.898312 | orchestrator |
2025-06-02 20:10:59.898323 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-02 20:10:59.898334 | orchestrator | Monday 02 June 2025 20:09:01 +0000 (0:00:00.296) 0:00:13.910 ***********
2025-06-02 20:10:59.898346 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.898357 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.898367 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.898378 | orchestrator |
2025-06-02 20:10:59.898389 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-02 20:10:59.898400 | orchestrator | Monday 02 June 2025 20:09:01 +0000 (0:00:00.298) 0:00:14.208 ***********
2025-06-02 20:10:59.898412 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.898423 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.898434 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.898445 | orchestrator |
2025-06-02 20:10:59.898456 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-02 20:10:59.898515 | orchestrator | Monday 02 June 2025 20:09:02 +0000 (0:00:00.303) 0:00:14.512 ***********
2025-06-02 20:10:59.898528 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:10:59.898539 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:10:59.898550 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:10:59.898561 | orchestrator |
2025-06-02 20:10:59.898572 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-02 20:10:59.898583 | orchestrator | Monday 02 June 2025 20:09:02 +0000 (0:00:00.465) 0:00:14.977 ***********
2025-06-02 20:10:59.898596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5468daec--208d--5ea7--b544--bcde6bebed84-osd--block--5468daec--208d--5ea7--b544--bcde6bebed84', 'dm-uuid-LVM-WMIWgYgOFk5ve8pvyr1nTHKEdH5fxpgS1EwOKDiC5TmWopEDT2MKqICjuO1Jttyn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d0ca6db9--1635--53d8--80de--4807c4d987bd-osd--block--d0ca6db9--1635--53d8--80de--4807c4d987bd', 'dm-uuid-LVM-ECchxFJiM7QA1jYtezbX90EZmKKpcqLEsHKrqJ11nIfhHbi0lk5eP32LYNNj2Hwy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:10:59.898856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b573976--5050--5314--b52d--708d81144fb3-osd--block--0b573976--5050--5314--b52d--708d81144fb3', 'dm-uuid-LVM-1ieb0bhxLuo1kHWLx7lbi5QD13h2huVqw3KwvjcWks8X7FvRPYMdCLNXWvRgVFsa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5468daec--208d--5ea7--b544--bcde6bebed84-osd--block--5468daec--208d--5ea7--b544--bcde6bebed84'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lrBEGl-yw2Y-BdE1-rDP5-YlEE-ZosO-hDZ9bW', 'scsi-0QEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250', 'scsi-SQEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:10:59.898924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dc535ca--7422--5c6b--b80a--593b3887af48-osd--block--1dc535ca--7422--5c6b--b80a--593b3887af48', 'dm-uuid-LVM-LoHkm5olbES90WwMvikiRHIidohw4vuw5S041h1adMdpSXokKEv2Nsailu7a9QH4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.898946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d0ca6db9--1635--53d8--80de--4807c4d987bd-osd--block--d0ca6db9--1635--53d8--80de--4807c4d987bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-feoeHi-pPOh-J9cI-uId5-a6oN-6vwN-1Fyx2n', 'scsi-0QEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773', 'scsi-SQEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:10:59.899099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.899117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335', 'scsi-SQEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:10:59.899129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:10:59.899147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:10:59.899228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
 2025-06-02 20:10:59.899252 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:10:59.899283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b51fe1f--19f9--5db6--a741--38088f1d71cf-osd--block--1b51fe1f--19f9--5db6--a741--38088f1d71cf', 'dm-uuid-LVM-GuD4Jm0I7W9dotSu8GihbrGJp815o6d3uFyVPxNhhoeqbWy7mkQpQj1enCIgUfPw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899509 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2dc54921--ef42--515a--84de--1f3d0e017dc1-osd--block--2dc54921--ef42--515a--84de--1f3d0e017dc1', 'dm-uuid-LVM-1aGrVgeJpeKfYtgTckKmxRoVB5YYOvQiZIwOikGdOr7fackyeqw1WIXsxOYiO8iB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0b573976--5050--5314--b52d--708d81144fb3-osd--block--0b573976--5050--5314--b52d--708d81144fb3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PTVbW2-YvR3-vTqK-UVZC-wNKM-c7G3-38YEyq', 'scsi-0QEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696', 'scsi-SQEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--1dc535ca--7422--5c6b--b80a--593b3887af48-osd--block--1dc535ca--7422--5c6b--b80a--593b3887af48'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Zkzeu-y56r-nEpa-frJC-TkLT-wBpE-VCRmuy', 'scsi-0QEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4', 'scsi-SQEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db', 'scsi-SQEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-02 20:10:59.899679 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:10:59.899700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:10:59.899815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part1', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part14', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part15', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part16', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1b51fe1f--19f9--5db6--a741--38088f1d71cf-osd--block--1b51fe1f--19f9--5db6--a741--38088f1d71cf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dVFu7w-JCsN-X8aA-UVLS-mzXn-63P3-CNrvfa', 'scsi-0QEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76', 'scsi-SQEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2dc54921--ef42--515a--84de--1f3d0e017dc1-osd--block--2dc54921--ef42--515a--84de--1f3d0e017dc1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DDdNZa-ucWj-2nM9-Whe6-n6xS-1kw3-n4Xe5i', 'scsi-0QEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6', 'scsi-SQEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f', 'scsi-SQEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:10:59.899903 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:10:59.899916 | orchestrator | 2025-06-02 20:10:59.899929 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-02 20:10:59.899942 | orchestrator | Monday 02 June 2025 20:09:03 +0000 (0:00:00.543) 0:00:15.521 *********** 2025-06-02 20:10:59.899955 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5468daec--208d--5ea7--b544--bcde6bebed84-osd--block--5468daec--208d--5ea7--b544--bcde6bebed84', 'dm-uuid-LVM-WMIWgYgOFk5ve8pvyr1nTHKEdH5fxpgS1EwOKDiC5TmWopEDT2MKqICjuO1Jttyn'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.899973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d0ca6db9--1635--53d8--80de--4807c4d987bd-osd--block--d0ca6db9--1635--53d8--80de--4807c4d987bd', 'dm-uuid-LVM-ECchxFJiM7QA1jYtezbX90EZmKKpcqLEsHKrqJ11nIfhHbi0lk5eP32LYNNj2Hwy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.899985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.899996 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900071 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a98dc1a-5ef0-44e2-89ee-a4db820b5c80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5468daec--208d--5ea7--b544--bcde6bebed84-osd--block--5468daec--208d--5ea7--b544--bcde6bebed84'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lrBEGl-yw2Y-BdE1-rDP5-YlEE-ZosO-hDZ9bW', 'scsi-0QEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250', 'scsi-SQEMU_QEMU_HARDDISK_3a656ee4-c3af-49b4-a6f0-0feb15d5e250'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900140 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b573976--5050--5314--b52d--708d81144fb3-osd--block--0b573976--5050--5314--b52d--708d81144fb3', 'dm-uuid-LVM-1ieb0bhxLuo1kHWLx7lbi5QD13h2huVqw3KwvjcWks8X7FvRPYMdCLNXWvRgVFsa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900156 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d0ca6db9--1635--53d8--80de--4807c4d987bd-osd--block--d0ca6db9--1635--53d8--80de--4807c4d987bd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-feoeHi-pPOh-J9cI-uId5-a6oN-6vwN-1Fyx2n', 'scsi-0QEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773', 'scsi-SQEMU_QEMU_HARDDISK_0cd9bba3-eceb-4382-8287-3e8628ac0773'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900174 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dc535ca--7422--5c6b--b80a--593b3887af48-osd--block--1dc535ca--7422--5c6b--b80a--593b3887af48', 'dm-uuid-LVM-LoHkm5olbES90WwMvikiRHIidohw4vuw5S041h1adMdpSXokKEv2Nsailu7a9QH4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335', 'scsi-SQEMU_QEMU_HARDDISK_117bc598-c43f-4136-b957-2f363a6b8335'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900204 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900216 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900227 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 20:10:59.900239 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900255 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900272 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900296 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900307 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900319 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b51fe1f--19f9--5db6--a741--38088f1d71cf-osd--block--1b51fe1f--19f9--5db6--a741--38088f1d71cf', 'dm-uuid-LVM-GuD4Jm0I7W9dotSu8GihbrGJp815o6d3uFyVPxNhhoeqbWy7mkQpQj1enCIgUfPw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900346 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900364 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2dc54921--ef42--515a--84de--1f3d0e017dc1-osd--block--2dc54921--ef42--515a--84de--1f3d0e017dc1', 'dm-uuid-LVM-1aGrVgeJpeKfYtgTckKmxRoVB5YYOvQiZIwOikGdOr7fackyeqw1WIXsxOYiO8iB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900385 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part16', 
'scsi-SQEMU_QEMU_HARDDISK_3109e32e-09f6-49b1-9102-762fc3bfff6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900398 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900415 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0b573976--5050--5314--b52d--708d81144fb3-osd--block--0b573976--5050--5314--b52d--708d81144fb3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PTVbW2-YvR3-vTqK-UVZC-wNKM-c7G3-38YEyq', 'scsi-0QEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696', 'scsi-SQEMU_QEMU_HARDDISK_05600669-f5a9-4eeb-abdf-0ca8c213e696'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900452 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1dc535ca--7422--5c6b--b80a--593b3887af48-osd--block--1dc535ca--7422--5c6b--b80a--593b3887af48'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Zkzeu-y56r-nEpa-frJC-TkLT-wBpE-VCRmuy', 'scsi-0QEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4', 'scsi-SQEMU_QEMU_HARDDISK_79afc6c6-58f6-4307-87e0-09bd0d860ce4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900464 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900475 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db', 'scsi-SQEMU_QEMU_HARDDISK_67fcb81d-853f-45f3-94a3-23b2668aa3db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900491 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900528 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900539 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:10:59.900550 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900562 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900574 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900598 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part1', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part14', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part15', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part16', 'scsi-SQEMU_QEMU_HARDDISK_5b521f65-f8fa-490f-8a49-c7f8940b6af3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900617 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1b51fe1f--19f9--5db6--a741--38088f1d71cf-osd--block--1b51fe1f--19f9--5db6--a741--38088f1d71cf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dVFu7w-JCsN-X8aA-UVLS-mzXn-63P3-CNrvfa', 'scsi-0QEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76', 'scsi-SQEMU_QEMU_HARDDISK_9ffd9bf2-84a3-4d27-b5f3-3356e7749f76'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900628 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2dc54921--ef42--515a--84de--1f3d0e017dc1-osd--block--2dc54921--ef42--515a--84de--1f3d0e017dc1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DDdNZa-ucWj-2nM9-Whe6-n6xS-1kw3-n4Xe5i', 'scsi-0QEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6', 'scsi-SQEMU_QEMU_HARDDISK_5963bf14-863c-43c8-92fe-9d0d39c425c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900644 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f', 'scsi-SQEMU_QEMU_HARDDISK_5887df38-b3fa-4a4d-abd1-7bd86d74143f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900727 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:10:59.900742 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:10:59.900754 | orchestrator | 2025-06-02 20:10:59.900765 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-02 20:10:59.900776 | orchestrator | Monday 02 June 2025 20:09:03 +0000 (0:00:00.538) 0:00:16.060 *********** 2025-06-02 20:10:59.900787 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:10:59.900799 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:10:59.900809 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:10:59.900820 | orchestrator | 2025-06-02 20:10:59.900831 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2025-06-02 20:10:59.900842 | orchestrator | Monday 02 June 2025 20:09:04 +0000 (0:00:00.657) 0:00:16.717 *********** 2025-06-02 20:10:59.900853 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:10:59.900863 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:10:59.900874 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:10:59.900885 | orchestrator | 2025-06-02 20:10:59.900896 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 20:10:59.900906 | orchestrator | Monday 02 June 2025 20:09:04 +0000 (0:00:00.501) 0:00:17.219 *********** 2025-06-02 20:10:59.900917 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:10:59.900928 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:10:59.900939 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:10:59.900950 | orchestrator | 2025-06-02 20:10:59.900961 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 20:10:59.900971 | orchestrator | Monday 02 June 2025 20:09:05 +0000 (0:00:00.635) 0:00:17.854 *********** 2025-06-02 20:10:59.900982 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:10:59.900993 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:10:59.901004 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:10:59.901015 | orchestrator | 2025-06-02 20:10:59.901026 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 20:10:59.901036 | orchestrator | Monday 02 June 2025 20:09:05 +0000 (0:00:00.285) 0:00:18.140 *********** 2025-06-02 20:10:59.901127 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:10:59.901141 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:10:59.901151 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:10:59.901162 | orchestrator | 2025-06-02 20:10:59.901173 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2025-06-02 20:10:59.901184 | orchestrator | Monday 02 June 2025 20:09:06 +0000 (0:00:00.401) 0:00:18.542 *********** 2025-06-02 20:10:59.901195 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:10:59.901206 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:10:59.901217 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:10:59.901228 | orchestrator | 2025-06-02 20:10:59.901238 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-02 20:10:59.901247 | orchestrator | Monday 02 June 2025 20:09:06 +0000 (0:00:00.559) 0:00:19.101 *********** 2025-06-02 20:10:59.901264 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-02 20:10:59.901274 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-02 20:10:59.901284 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-02 20:10:59.901293 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-02 20:10:59.901303 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-02 20:10:59.901313 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-02 20:10:59.901322 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-02 20:10:59.901332 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-02 20:10:59.901341 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-02 20:10:59.901351 | orchestrator | 2025-06-02 20:10:59.901369 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-02 20:10:59.901394 | orchestrator | Monday 02 June 2025 20:09:07 +0000 (0:00:00.805) 0:00:19.907 *********** 2025-06-02 20:10:59.901412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 20:10:59.901429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 20:10:59.901445 | orchestrator | skipping: 
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-5]

TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
Monday 02 June 2025  20:09:07 +0000 (0:00:00.331)       0:00:20.238 ***********
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Monday 02 June 2025  20:09:08 +0000 (0:00:00.667)       0:00:20.905 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Monday 02 June 2025  20:09:08 +0000 (0:00:00.319)       0:00:21.224 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Monday 02 June 2025  20:09:09 +0000 (0:00:00.291)       0:00:21.515 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Monday 02 June 2025  20:09:09 +0000 (0:00:00.393)       0:00:21.908 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact _interface] ****************************************
Monday 02 June 2025  20:09:09 +0000 (0:00:00.591)       0:00:22.499 ***********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Monday 02 June 2025  20:09:10 +0000 (0:00:00.352)       0:00:22.852 ***********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Monday 02 June 2025  20:09:10 +0000 (0:00:00.347)       0:00:23.200 ***********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Monday 02 June 2025  20:09:11 +0000 (0:00:00.367)       0:00:23.567 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Monday 02 June 2025  20:09:11 +0000 (0:00:00.309)       0:00:23.877 ***********
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=0)

TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
Monday 02 June 2025  20:09:11 +0000 (0:00:00.468)       0:00:24.346 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
Monday 02 June 2025  20:09:12 +0000 (0:00:00.973)       0:00:25.319 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [Include tasks from the ceph-osd role] ************************************
Monday 02 June 2025  20:09:14 +0000 (0:00:01.904)       0:00:27.223 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
included: /ansible/tasks/openstack_config.yml for testbed-node-5

TASK [create openstack pool(s)] ************************************************
Monday 02 June 2025  20:09:15 +0000 (0:00:00.344)       0:00:27.568 ***********
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [generate keys] ***********************************************************
Monday 02 June 2025  20:10:02 +0000 (0:00:47.663)       0:01:15.232 ***********
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]

TASK [get keys from monitors] **************************************************
Monday 02 June 2025  20:10:27 +0000 (0:00:24.327)       0:01:39.559 ***********
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [copy ceph key(s) if needed] **********************************************
Monday 02 June 2025  20:10:39 +0000 (0:00:12.465)       0:01:52.025 ***********
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> {{ item.1 }}]

PLAY RECAP *********************************************************************
testbed-node-3             : ok=25   changed=0    unreachable=0    failed=0    skipped=28   rescued=0    ignored=0
testbed-node-4             : ok=18   changed=0    unreachable=0    failed=0    skipped=21   rescued=0    ignored=0
testbed-node-5             : ok=23   changed=3    unreachable=0    failed=0    skipped=20   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Monday 02 June 2025  20:10:57 +0000 (0:00:18.030)       0:02:10.056 ***********
===============================================================================
create openstack pool(s) ----------------------------------------------- 47.66s
generate keys ---------------------------------------------------------- 24.33s
copy ceph key(s) if needed --------------------------------------------- 18.03s
get keys from monitors ------------------------------------------------- 12.47s
ceph-facts : Find a running mon container ------------------------------- 2.11s
ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.90s
ceph-facts : Get current fsid if cluster is already running ------------- 1.83s
ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.97s
ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.81s
ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.74s
ceph-facts : Check if podman binary is present -------------------------- 0.70s
ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s
ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s
ceph-facts : Check if it is atomic host --------------------------------- 0.65s
ceph-facts : Read osd pool default crush rule --------------------------- 0.64s
ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s
ceph-facts : Include facts.yml ------------------------------------------ 0.59s
ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.56s
ceph-facts : Collect existed devices ------------------------------------ 0.54s
ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.54s

2025-06-02 20:10:59 | INFO  | Task 8394ea79-e79c-4b79-a488-df6f6b1ed989 is in state STARTED
2025-06-02 20:10:59 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:10:59 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:10:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:11:27 | INFO  | Task 8394ea79-e79c-4b79-a488-df6f6b1ed989 is in state SUCCESS
2025-06-02 20:11:27 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:11:27 | INFO  | Task 0c1a04ab-f32a-4829-aecb-4795f9ac3e18 is in state STARTED
2025-06-02 20:11:27 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state STARTED
2025-06-02 20:11:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:06 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:06 | INFO  | Task 0c1a04ab-f32a-4829-aecb-4795f9ac3e18 is in state STARTED
2025-06-02 20:12:06 | INFO  | Task 0b3b0acf-31c9-4e36-b621-c71e0b342c50 is in state SUCCESS
2025-06-02 20:12:06 | INFO  | Wait 1 second(s) until the next check

PLAY [Copy ceph keys to the configuration repository] **************************

TASK [Fetch all ceph keys] *****************************************************
Monday 02 June 2025  20:11:02 +0000 (0:00:00.171)       0:00:00.171 ***********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)

TASK [Create share directory] **************************************************
Monday 02 June 2025  20:11:05 +0000 (0:00:03.743)       0:00:03.914 ***********
changed: [testbed-manager -> localhost]

TASK [Write ceph keys to the share directory] **********************************
Monday 02 June 2025  20:11:06 +0000 (0:00:00.862)       0:00:04.777 ***********
changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)

TASK [Write ceph keys to the configuration directory] **************************
Monday 02 June 2025  20:11:18 +0000 (0:00:11.975)       0:00:16.752 ***********
changed: [testbed-manager] => (item=ceph.client.admin.keyring)
changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager] => (item=ceph.client.nova.keyring)
changed: [testbed-manager] => (item=ceph.client.glance.keyring)
changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
changed: [testbed-manager] => (item=ceph.client.manila.keyring)

PLAY RECAP *********************************************************************
testbed-manager            : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Monday 02 June 2025  20:11:24 +0000 (0:00:06.208)       0:00:22.961 ***********
===============================================================================
Write ceph keys to the share directory --------------------------------- 11.98s
Write ceph keys to the configuration directory -------------------------- 6.21s
Fetch all ceph keys ----------------------------------------------------- 3.74s
Create share directory -------------------------------------------------- 0.86s

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Monday 02 June 2025  20:10:21 +0000 (0:00:00.254)       0:00:00.254 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Monday 02 June 2025  20:10:22 +0000 (0:00:00.266)       0:00:00.521 ***********
ok: [testbed-node-0] => (item=enable_horizon_True)
ok: [testbed-node-1] => (item=enable_horizon_True)
ok: [testbed-node-2] => (item=enable_horizon_True)

PLAY [Apply role horizon] ******************************************************

TASK [horizon : include_tasks] *************************************************
Monday 02 June 2025  20:10:22 +0000 (0:00:00.405)       0:00:00.926 ***********
included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [horizon : Ensuring config directories exist] *****************************
Monday 02 June 2025  20:10:23 +0000 (0:00:00.525)       0:00:01.451 ***********
changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR':
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:12:06.905751 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:12:06.905771 | orchestrator | 2025-06-02 20:12:06.905783 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-02 20:12:06.905794 | orchestrator | Monday 02 June 2025 20:10:24 +0000 (0:00:01.138) 0:00:02.590 *********** 2025-06-02 20:12:06.905805 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.905816 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.905827 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.905838 | orchestrator | 2025-06-02 20:12:06.905849 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 20:12:06.905859 | orchestrator | Monday 02 June 2025 20:10:24 +0000 (0:00:00.428) 0:00:03.019 *********** 2025-06-02 20:12:06.905871 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 20:12:06.905882 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 20:12:06.905898 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 20:12:06.905909 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 20:12:06.905920 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 20:12:06.905931 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 20:12:06.905942 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-02 20:12:06.905953 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 20:12:06.905963 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 20:12:06.905974 | 
orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 20:12:06.905985 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 20:12:06.905995 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 20:12:06.906006 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 20:12:06.906074 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 20:12:06.906089 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-02 20:12:06.906100 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 20:12:06.906111 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 20:12:06.906131 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 20:12:06.906142 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 20:12:06.906152 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 20:12:06.906163 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 20:12:06.906179 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 20:12:06.906190 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-02 20:12:06.906201 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 20:12:06.906213 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-02 20:12:06.906226 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-02 20:12:06.906237 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-02 20:12:06.906248 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-02 20:12:06.906259 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-02 20:12:06.906270 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-02 20:12:06.906281 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-02 20:12:06.906291 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-02 20:12:06.906302 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-02 20:12:06.906313 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-02 20:12:06.906324 | orchestrator | 2025-06-02 20:12:06.906335 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:12:06.906346 | orchestrator | Monday 02 June 2025 20:10:25 +0000 
(0:00:00.696) 0:00:03.716 *********** 2025-06-02 20:12:06.906356 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.906367 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.906378 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.906389 | orchestrator | 2025-06-02 20:12:06.906400 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.906410 | orchestrator | Monday 02 June 2025 20:10:25 +0000 (0:00:00.274) 0:00:03.990 *********** 2025-06-02 20:12:06.906421 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.906432 | orchestrator | 2025-06-02 20:12:06.906443 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.906460 | orchestrator | Monday 02 June 2025 20:10:25 +0000 (0:00:00.092) 0:00:04.083 *********** 2025-06-02 20:12:06.906471 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.906482 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.906493 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.906504 | orchestrator | 2025-06-02 20:12:06.906515 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:12:06.906533 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.370) 0:00:04.454 *********** 2025-06-02 20:12:06.906543 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.906554 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.906565 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.906576 | orchestrator | 2025-06-02 20:12:06.906587 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.906598 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.260) 0:00:04.714 *********** 2025-06-02 20:12:06.906609 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.906648 | orchestrator | 2025-06-02 20:12:06.906659 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.906670 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.129) 0:00:04.844 *********** 2025-06-02 20:12:06.906681 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.906692 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.906703 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.906714 | orchestrator | 2025-06-02 20:12:06.906724 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:12:06.906736 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.235) 0:00:05.079 *********** 2025-06-02 20:12:06.906746 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.906757 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.906768 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.906779 | orchestrator | 2025-06-02 20:12:06.906790 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.906801 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.253) 0:00:05.332 *********** 2025-06-02 20:12:06.906811 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.906822 | orchestrator | 2025-06-02 20:12:06.906833 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.906844 | orchestrator | Monday 02 June 2025 20:10:27 +0000 (0:00:00.245) 0:00:05.577 *********** 2025-06-02 20:12:06.906854 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.906865 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.906876 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.906887 | orchestrator | 2025-06-02 20:12:06.906903 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:12:06.906914 | orchestrator | Monday 02 June 2025 
20:10:27 +0000 (0:00:00.241) 0:00:05.819 *********** 2025-06-02 20:12:06.906925 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.906936 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.906946 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.906957 | orchestrator | 2025-06-02 20:12:06.906968 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.906979 | orchestrator | Monday 02 June 2025 20:10:27 +0000 (0:00:00.277) 0:00:06.097 *********** 2025-06-02 20:12:06.906989 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.907000 | orchestrator | 2025-06-02 20:12:06.907011 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.907022 | orchestrator | Monday 02 June 2025 20:10:27 +0000 (0:00:00.114) 0:00:06.211 *********** 2025-06-02 20:12:06.907032 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.907043 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.907054 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.907065 | orchestrator | 2025-06-02 20:12:06.907076 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:12:06.907087 | orchestrator | Monday 02 June 2025 20:10:28 +0000 (0:00:00.252) 0:00:06.464 *********** 2025-06-02 20:12:06.907097 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.907108 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.907119 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.907130 | orchestrator | 2025-06-02 20:12:06.907141 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.907152 | orchestrator | Monday 02 June 2025 20:10:28 +0000 (0:00:00.402) 0:00:06.866 *********** 2025-06-02 20:12:06.907170 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.907186 | orchestrator | 2025-06-02 
20:12:06.907204 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.907222 | orchestrator | Monday 02 June 2025 20:10:28 +0000 (0:00:00.129) 0:00:06.996 *********** 2025-06-02 20:12:06.907239 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.907255 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.907272 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.907289 | orchestrator | 2025-06-02 20:12:06.907307 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:12:06.907323 | orchestrator | Monday 02 June 2025 20:10:28 +0000 (0:00:00.344) 0:00:07.340 *********** 2025-06-02 20:12:06.907339 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.907355 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.907371 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.907390 | orchestrator | 2025-06-02 20:12:06.907409 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.907427 | orchestrator | Monday 02 June 2025 20:10:29 +0000 (0:00:00.234) 0:00:07.575 *********** 2025-06-02 20:12:06.907446 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.907464 | orchestrator | 2025-06-02 20:12:06.907482 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.907500 | orchestrator | Monday 02 June 2025 20:10:29 +0000 (0:00:00.103) 0:00:07.679 *********** 2025-06-02 20:12:06.907517 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.907535 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.907554 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.907572 | orchestrator | 2025-06-02 20:12:06.907589 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:12:06.907607 | orchestrator | 
Monday 02 June 2025 20:10:29 +0000 (0:00:00.342) 0:00:08.021 *********** 2025-06-02 20:12:06.907655 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.907676 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.907695 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.907714 | orchestrator | 2025-06-02 20:12:06.907746 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.907766 | orchestrator | Monday 02 June 2025 20:10:29 +0000 (0:00:00.259) 0:00:08.281 *********** 2025-06-02 20:12:06.907784 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.907802 | orchestrator | 2025-06-02 20:12:06.907822 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.907842 | orchestrator | Monday 02 June 2025 20:10:30 +0000 (0:00:00.117) 0:00:08.399 *********** 2025-06-02 20:12:06.907860 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.907880 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.907891 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.907902 | orchestrator | 2025-06-02 20:12:06.907913 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:12:06.907924 | orchestrator | Monday 02 June 2025 20:10:30 +0000 (0:00:00.267) 0:00:08.666 *********** 2025-06-02 20:12:06.907935 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.907946 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.907957 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.907968 | orchestrator | 2025-06-02 20:12:06.907979 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.907990 | orchestrator | Monday 02 June 2025 20:10:30 +0000 (0:00:00.308) 0:00:08.975 *********** 2025-06-02 20:12:06.908001 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.908012 | 
orchestrator | 2025-06-02 20:12:06.908023 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.908034 | orchestrator | Monday 02 June 2025 20:10:30 +0000 (0:00:00.106) 0:00:09.081 *********** 2025-06-02 20:12:06.908045 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.908056 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.908078 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.908089 | orchestrator | 2025-06-02 20:12:06.908100 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:12:06.908111 | orchestrator | Monday 02 June 2025 20:10:31 +0000 (0:00:00.459) 0:00:09.541 *********** 2025-06-02 20:12:06.908122 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.908133 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.908144 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.908155 | orchestrator | 2025-06-02 20:12:06.908166 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.908177 | orchestrator | Monday 02 June 2025 20:10:31 +0000 (0:00:00.293) 0:00:09.835 *********** 2025-06-02 20:12:06.908195 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.908206 | orchestrator | 2025-06-02 20:12:06.908217 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.908229 | orchestrator | Monday 02 June 2025 20:10:31 +0000 (0:00:00.126) 0:00:09.961 *********** 2025-06-02 20:12:06.908239 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.908250 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.908261 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.908272 | orchestrator | 2025-06-02 20:12:06.908283 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 
20:12:06.908294 | orchestrator | Monday 02 June 2025 20:10:31 +0000 (0:00:00.271) 0:00:10.232 *********** 2025-06-02 20:12:06.908305 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:06.908316 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:06.908327 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:06.908338 | orchestrator | 2025-06-02 20:12:06.908349 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:12:06.908360 | orchestrator | Monday 02 June 2025 20:10:32 +0000 (0:00:00.506) 0:00:10.738 *********** 2025-06-02 20:12:06.908371 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.908382 | orchestrator | 2025-06-02 20:12:06.908393 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:12:06.908404 | orchestrator | Monday 02 June 2025 20:10:32 +0000 (0:00:00.133) 0:00:10.872 *********** 2025-06-02 20:12:06.908415 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.908426 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.908437 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.908448 | orchestrator | 2025-06-02 20:12:06.908459 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-02 20:12:06.908470 | orchestrator | Monday 02 June 2025 20:10:32 +0000 (0:00:00.289) 0:00:11.162 *********** 2025-06-02 20:12:06.908481 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:12:06.908492 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:06.908503 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:12:06.908514 | orchestrator | 2025-06-02 20:12:06.908525 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-02 20:12:06.908536 | orchestrator | Monday 02 June 2025 20:10:34 +0000 (0:00:01.603) 0:00:12.765 *********** 2025-06-02 20:12:06.908547 | orchestrator | changed: 
[testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-02 20:12:06.908558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-02 20:12:06.908569 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-02 20:12:06.908579 | orchestrator | 2025-06-02 20:12:06.908590 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-02 20:12:06.908601 | orchestrator | Monday 02 June 2025 20:10:36 +0000 (0:00:01.833) 0:00:14.599 *********** 2025-06-02 20:12:06.908612 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 20:12:06.908667 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 20:12:06.908692 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 20:12:06.908703 | orchestrator | 2025-06-02 20:12:06.908714 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-02 20:12:06.908725 | orchestrator | Monday 02 June 2025 20:10:38 +0000 (0:00:02.514) 0:00:17.114 *********** 2025-06-02 20:12:06.908744 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 20:12:06.908755 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 20:12:06.908767 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 20:12:06.908778 | orchestrator | 2025-06-02 20:12:06.908788 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-02 20:12:06.908799 | orchestrator | Monday 02 June 2025 20:10:40 +0000 
(0:00:01.928) 0:00:19.043 *********** 2025-06-02 20:12:06.908810 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.908821 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.908831 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.908842 | orchestrator | 2025-06-02 20:12:06.908853 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-02 20:12:06.908864 | orchestrator | Monday 02 June 2025 20:10:40 +0000 (0:00:00.299) 0:00:19.342 *********** 2025-06-02 20:12:06.908874 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.908885 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.908896 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.908907 | orchestrator | 2025-06-02 20:12:06.908917 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 20:12:06.908928 | orchestrator | Monday 02 June 2025 20:10:41 +0000 (0:00:00.342) 0:00:19.685 *********** 2025-06-02 20:12:06.908939 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:12:06.908950 | orchestrator | 2025-06-02 20:12:06.908960 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-02 20:12:06.908971 | orchestrator | Monday 02 June 2025 20:10:42 +0000 (0:00:00.758) 0:00:20.443 *********** 2025-06-02 20:12:06.908991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:12:06.909023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:12:06.909042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:12:06.909062 | orchestrator | 2025-06-02 20:12:06.909073 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-02 20:12:06.909084 | orchestrator | Monday 02 June 2025 20:10:43 +0000 (0:00:01.470) 0:00:21.914 *********** 2025-06-02 20:12:06.909111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:12:06.909125 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.909137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:12:06.909161 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.909179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:12:06.909192 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.909203 | orchestrator | 2025-06-02 20:12:06.909214 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-02 20:12:06.909228 | orchestrator | Monday 02 June 2025 20:10:44 +0000 (0:00:00.691) 0:00:22.605 *********** 2025-06-02 20:12:06.909260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 
'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:12:06.909294 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.909323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:12:06.909354 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.909388 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:12:06.909409 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.909429 | orchestrator | 2025-06-02 20:12:06.909449 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-02 20:12:06.909468 | orchestrator | Monday 02 June 2025 20:10:45 +0000 (0:00:01.029) 0:00:23.635 *********** 2025-06-02 20:12:06.909499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:12:06.909534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:12:06.909561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:12:06.909593 | orchestrator | 2025-06-02 20:12:06.909638 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 20:12:06.909658 | orchestrator | Monday 02 June 2025 20:10:46 +0000 (0:00:01.282) 0:00:24.918 *********** 2025-06-02 20:12:06.909677 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:06.909697 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:06.909715 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:06.909735 | orchestrator | 2025-06-02 20:12:06.909755 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 20:12:06.909773 | orchestrator | Monday 02 June 2025 20:10:46 +0000 (0:00:00.294) 0:00:25.212 *********** 2025-06-02 20:12:06.909792 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for 
testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:12:06.909803 | orchestrator |
2025-06-02 20:12:06.909814 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-06-02 20:12:06.909825 | orchestrator | Monday 02 June 2025 20:10:47 +0000 (0:00:00.814) 0:00:26.027 ***********
2025-06-02 20:12:06.909836 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:06.909847 | orchestrator |
2025-06-02 20:12:06.909866 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-06-02 20:12:06.909877 | orchestrator | Monday 02 June 2025 20:10:49 +0000 (0:00:02.221) 0:00:28.249 ***********
2025-06-02 20:12:06.909888 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:06.909899 | orchestrator |
2025-06-02 20:12:06.909911 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-06-02 20:12:06.909921 | orchestrator | Monday 02 June 2025 20:10:51 +0000 (0:00:02.095) 0:00:30.344 ***********
2025-06-02 20:12:06.909932 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:06.909943 | orchestrator |
2025-06-02 20:12:06.909954 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-02 20:12:06.909964 | orchestrator | Monday 02 June 2025 20:11:06 +0000 (0:00:14.056) 0:00:44.400 ***********
2025-06-02 20:12:06.909975 | orchestrator |
2025-06-02 20:12:06.909986 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-02 20:12:06.909997 | orchestrator | Monday 02 June 2025 20:11:06 +0000 (0:00:00.060) 0:00:44.460 ***********
2025-06-02 20:12:06.910007 | orchestrator |
2025-06-02 20:12:06.910053 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-02 20:12:06.910067 | orchestrator | Monday 02 June 2025 20:11:06 +0000 (0:00:00.059) 0:00:44.519 ***********
2025-06-02 20:12:06.910078 | orchestrator |
2025-06-02 20:12:06.910089 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-06-02 20:12:06.910099 | orchestrator | Monday 02 June 2025 20:11:06 +0000 (0:00:00.060) 0:00:44.580 ***********
2025-06-02 20:12:06.910110 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:06.910121 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:12:06.910132 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:12:06.910142 | orchestrator |
2025-06-02 20:12:06.910153 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:12:06.910179 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-06-02 20:12:06.910191 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-02 20:12:06.910209 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-02 20:12:06.910220 | orchestrator |
2025-06-02 20:12:06.910231 | orchestrator |
2025-06-02 20:12:06.910242 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:12:06.910253 | orchestrator | Monday 02 June 2025 20:12:05 +0000 (0:00:59.138) 0:01:43.719 ***********
2025-06-02 20:12:06.910264 | orchestrator | ===============================================================================
2025-06-02 20:12:06.910275 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.14s
2025-06-02 20:12:06.910285 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.06s
2025-06-02 20:12:06.910296 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.51s
2025-06-02 20:12:06.910307 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.22s
2025-06-02 20:12:06.910318 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.10s
2025-06-02 20:12:06.910328 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.93s
2025-06-02 20:12:06.910339 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.83s
2025-06-02 20:12:06.910350 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.60s
2025-06-02 20:12:06.910361 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.47s
2025-06-02 20:12:06.910371 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.28s
2025-06-02 20:12:06.910382 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.14s
2025-06-02 20:12:06.910393 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.03s
2025-06-02 20:12:06.910404 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s
2025-06-02 20:12:06.910414 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s
2025-06-02 20:12:06.910425 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s
2025-06-02 20:12:06.910436 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.69s
2025-06-02 20:12:06.910447 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s
2025-06-02 20:12:06.910458 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s
2025-06-02 20:12:06.910469 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.46s
2025-06-02 20:12:06.910479 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.43s
2025-06-02 20:12:09.957462 | orchestrator | 2025-06-02 20:12:09 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:09.960921 | orchestrator | 2025-06-02 20:12:09 | INFO  | Task 0c1a04ab-f32a-4829-aecb-4795f9ac3e18 is in state STARTED
2025-06-02 20:12:09.960998 | orchestrator | 2025-06-02 20:12:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:13.016280 | orchestrator | 2025-06-02 20:12:13 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:13.018342 | orchestrator | 2025-06-02 20:12:13 | INFO  | Task 0c1a04ab-f32a-4829-aecb-4795f9ac3e18 is in state STARTED
2025-06-02 20:12:13.019130 | orchestrator | 2025-06-02 20:12:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:16.069900 | orchestrator | 2025-06-02 20:12:16 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:16.072194 | orchestrator | 2025-06-02 20:12:16 | INFO  | Task 0c1a04ab-f32a-4829-aecb-4795f9ac3e18 is in state STARTED
2025-06-02 20:12:16.072482 | orchestrator | 2025-06-02 20:12:16 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:19.128003 | orchestrator | 2025-06-02 20:12:19 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED
2025-06-02 20:12:19.128088 | orchestrator | 2025-06-02 20:12:19 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED
2025-06-02 20:12:19.130155 | orchestrator | 2025-06-02 20:12:19 | INFO  | Task 398bd24e-4e69-4a57-a792-dceaac8c8b9d is in state STARTED
2025-06-02 20:12:19.131110 | orchestrator | 2025-06-02 20:12:19 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:19.133473 | orchestrator | 2025-06-02 20:12:19 | INFO  | Task 0c1a04ab-f32a-4829-aecb-4795f9ac3e18 is in state SUCCESS
2025-06-02 20:12:19.133502 | orchestrator | 2025-06-02 20:12:19 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:22.186096 | orchestrator | 2025-06-02 20:12:22 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED
2025-06-02 20:12:22.187274 | orchestrator | 2025-06-02 20:12:22 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED
2025-06-02 20:12:22.187685 | orchestrator | 2025-06-02 20:12:22 | INFO  | Task 398bd24e-4e69-4a57-a792-dceaac8c8b9d is in state STARTED
2025-06-02 20:12:22.188485 | orchestrator | 2025-06-02 20:12:22 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:22.188537 | orchestrator | 2025-06-02 20:12:22 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:25.220996 | orchestrator | 2025-06-02 20:12:25 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED
2025-06-02 20:12:25.221089 | orchestrator | 2025-06-02 20:12:25 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED
2025-06-02 20:12:25.221405 | orchestrator | 2025-06-02 20:12:25 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED
2025-06-02 20:12:25.222675 | orchestrator | 2025-06-02 20:12:25 | INFO  | Task 398bd24e-4e69-4a57-a792-dceaac8c8b9d is in state SUCCESS
2025-06-02 20:12:25.223511 | orchestrator | 2025-06-02 20:12:25 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED
2025-06-02 20:12:25.224148 | orchestrator | 2025-06-02 20:12:25 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:25.224167 | orchestrator | 2025-06-02 20:12:25 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:28.265050 | orchestrator | 2025-06-02 20:12:28 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED
2025-06-02 20:12:28.265369 | orchestrator | 2025-06-02 20:12:28 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED
2025-06-02 20:12:28.266262 | orchestrator | 2025-06-02 20:12:28 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED
2025-06-02 20:12:28.268009 | orchestrator | 2025-06-02 20:12:28 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED
2025-06-02 20:12:28.269100 | orchestrator | 2025-06-02 20:12:28 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:28.269893 | orchestrator | 2025-06-02 20:12:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:31.322486 | orchestrator | 2025-06-02 20:12:31 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED
2025-06-02 20:12:31.330894 | orchestrator | 2025-06-02 20:12:31 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED
2025-06-02 20:12:31.331001 | orchestrator | 2025-06-02 20:12:31 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED
2025-06-02 20:12:31.331012 | orchestrator | 2025-06-02 20:12:31 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED
2025-06-02 20:12:31.331019 | orchestrator | 2025-06-02 20:12:31 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:31.331026 | orchestrator | 2025-06-02 20:12:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:34.368409 | orchestrator | 2025-06-02 20:12:34 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED
2025-06-02 20:12:34.371172 | orchestrator | 2025-06-02 20:12:34 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED
2025-06-02 20:12:34.372864 | orchestrator | 2025-06-02 20:12:34 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED
2025-06-02 20:12:34.373464 | orchestrator | 2025-06-02 20:12:34 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED
2025-06-02 20:12:34.374557 | orchestrator | 2025-06-02 20:12:34 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED
2025-06-02 20:12:34.374587 | orchestrator | 2025-06-02 20:12:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:37.426692 | orchestrator | 2025-06-02 20:12:37 | INFO  | Task 
f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:12:37.428233 | orchestrator | 2025-06-02 20:12:37 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:12:37.431494 | orchestrator | 2025-06-02 20:12:37 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:12:37.433482 | orchestrator | 2025-06-02 20:12:37 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:12:37.435824 | orchestrator | 2025-06-02 20:12:37 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED 2025-06-02 20:12:37.435883 | orchestrator | 2025-06-02 20:12:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:40.465165 | orchestrator | 2025-06-02 20:12:40 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:12:40.468185 | orchestrator | 2025-06-02 20:12:40 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:12:40.468287 | orchestrator | 2025-06-02 20:12:40 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:12:40.468311 | orchestrator | 2025-06-02 20:12:40 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:12:40.468999 | orchestrator | 2025-06-02 20:12:40 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED 2025-06-02 20:12:40.469056 | orchestrator | 2025-06-02 20:12:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:43.512098 | orchestrator | 2025-06-02 20:12:43 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:12:43.513355 | orchestrator | 2025-06-02 20:12:43 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:12:43.514943 | orchestrator | 2025-06-02 20:12:43 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:12:43.516387 | orchestrator | 2025-06-02 20:12:43 | INFO  | Task 
2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:12:43.517585 | orchestrator | 2025-06-02 20:12:43 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED 2025-06-02 20:12:43.517731 | orchestrator | 2025-06-02 20:12:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:46.547412 | orchestrator | 2025-06-02 20:12:46 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:12:46.547501 | orchestrator | 2025-06-02 20:12:46 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:12:46.550782 | orchestrator | 2025-06-02 20:12:46 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:12:46.551030 | orchestrator | 2025-06-02 20:12:46 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:12:46.553736 | orchestrator | 2025-06-02 20:12:46 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED 2025-06-02 20:12:46.553783 | orchestrator | 2025-06-02 20:12:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:49.585831 | orchestrator | 2025-06-02 20:12:49 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:12:49.585933 | orchestrator | 2025-06-02 20:12:49 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:12:49.596238 | orchestrator | 2025-06-02 20:12:49 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:12:49.596314 | orchestrator | 2025-06-02 20:12:49 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:12:49.596320 | orchestrator | 2025-06-02 20:12:49 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED 2025-06-02 20:12:49.596327 | orchestrator | 2025-06-02 20:12:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:52.619102 | orchestrator | 2025-06-02 20:12:52 | INFO  | Task 
f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:12:52.619206 | orchestrator | 2025-06-02 20:12:52 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:12:52.620205 | orchestrator | 2025-06-02 20:12:52 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:12:52.621749 | orchestrator | 2025-06-02 20:12:52 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:12:52.622426 | orchestrator | 2025-06-02 20:12:52 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED 2025-06-02 20:12:52.622513 | orchestrator | 2025-06-02 20:12:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:55.649659 | orchestrator | 2025-06-02 20:12:55 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:12:55.651329 | orchestrator | 2025-06-02 20:12:55 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:12:55.651721 | orchestrator | 2025-06-02 20:12:55 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:12:55.653026 | orchestrator | 2025-06-02 20:12:55 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:12:55.654416 | orchestrator | 2025-06-02 20:12:55 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state STARTED 2025-06-02 20:12:55.654446 | orchestrator | 2025-06-02 20:12:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:58.706918 | orchestrator | 2025-06-02 20:12:58 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:12:58.707972 | orchestrator | 2025-06-02 20:12:58 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:12:58.710700 | orchestrator | 2025-06-02 20:12:58 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:12:58.712852 | orchestrator | 2025-06-02 20:12:58 | INFO  | Task 
5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED
2025-06-02 20:12:58.714949 | orchestrator | 2025-06-02 20:12:58 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED
2025-06-02 20:12:58.718264 | orchestrator | 2025-06-02 20:12:58 | INFO  | Task 203592fb-d06c-4f18-bb6f-430fa69aaa86 is in state SUCCESS
2025-06-02 20:12:58.718618 | orchestrator |
2025-06-02 20:12:58.718639 | orchestrator |
2025-06-02 20:12:58.718699 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-06-02 20:12:58.718711 | orchestrator |
2025-06-02 20:12:58.718725 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-06-02 20:12:58.718740 | orchestrator | Monday 02 June 2025 20:11:28 +0000 (0:00:00.210) 0:00:00.210 ***********
2025-06-02 20:12:58.718751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-06-02 20:12:58.718763 | orchestrator |
2025-06-02 20:12:58.718774 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-06-02 20:12:58.718784 | orchestrator | Monday 02 June 2025 20:11:28 +0000 (0:00:00.199) 0:00:00.409 ***********
2025-06-02 20:12:58.718795 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-06-02 20:12:58.718806 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-06-02 20:12:58.718818 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-06-02 20:12:58.718829 | orchestrator |
2025-06-02 20:12:58.718840 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-06-02 20:12:58.718851 | orchestrator | Monday 02 June 2025 20:11:30 +0000 (0:00:01.077) 0:00:01.486 ***********
2025-06-02 20:12:58.718863 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-06-02 20:12:58.718873 | orchestrator |
2025-06-02 20:12:58.718884 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-06-02 20:12:58.718894 | orchestrator | Monday 02 June 2025 20:11:31 +0000 (0:00:01.033) 0:00:02.519 ***********
2025-06-02 20:12:58.718906 | orchestrator | changed: [testbed-manager]
2025-06-02 20:12:58.718917 | orchestrator |
2025-06-02 20:12:58.718928 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-06-02 20:12:58.718938 | orchestrator | Monday 02 June 2025 20:11:31 +0000 (0:00:00.901) 0:00:03.421 ***********
2025-06-02 20:12:58.718948 | orchestrator | changed: [testbed-manager]
2025-06-02 20:12:58.718959 | orchestrator |
2025-06-02 20:12:58.718969 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-06-02 20:12:58.718980 | orchestrator | Monday 02 June 2025 20:11:32 +0000 (0:00:00.829) 0:00:04.250 ***********
2025-06-02 20:12:58.718990 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-06-02 20:12:58.719001 | orchestrator | ok: [testbed-manager]
2025-06-02 20:12:58.719011 | orchestrator |
2025-06-02 20:12:58.719054 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-06-02 20:12:58.719066 | orchestrator | Monday 02 June 2025 20:12:07 +0000 (0:00:35.131) 0:00:39.382 ***********
2025-06-02 20:12:58.719076 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-06-02 20:12:58.719087 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-06-02 20:12:58.719097 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-06-02 20:12:58.719107 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-06-02 20:12:58.719117 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-06-02 20:12:58.719127 | orchestrator |
2025-06-02 20:12:58.719137 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-06-02 20:12:58.719147 | orchestrator | Monday 02 June 2025 20:12:11 +0000 (0:00:03.912) 0:00:43.294 ***********
2025-06-02 20:12:58.719232 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-06-02 20:12:58.719240 | orchestrator |
2025-06-02 20:12:58.719246 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-06-02 20:12:58.719271 | orchestrator | Monday 02 June 2025 20:12:12 +0000 (0:00:00.429) 0:00:43.723 ***********
2025-06-02 20:12:58.719278 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:12:58.719284 | orchestrator |
2025-06-02 20:12:58.719290 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-06-02 20:12:58.719297 | orchestrator | Monday 02 June 2025 20:12:12 +0000 (0:00:00.128) 0:00:43.852 ***********
2025-06-02 20:12:58.719303 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:12:58.719309 | orchestrator |
2025-06-02 20:12:58.719315 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-06-02 20:12:58.719406 | orchestrator | Monday 02 June 2025 20:12:12 +0000 (0:00:00.287) 0:00:44.140 ***********
2025-06-02 20:12:58.719417 | orchestrator | changed: [testbed-manager]
2025-06-02 20:12:58.719423 | orchestrator |
2025-06-02 20:12:58.719430 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-06-02 20:12:58.719436 | orchestrator | Monday 02 June 2025 20:12:14 +0000 (0:00:01.666) 0:00:45.806 ***********
2025-06-02 20:12:58.719442 | orchestrator | changed: [testbed-manager]
2025-06-02 20:12:58.719448 | orchestrator |
2025-06-02 20:12:58.719454 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-06-02 20:12:58.719460 | orchestrator | Monday 02 June 2025 20:12:15 +0000 (0:00:00.703) 0:00:46.509 ***********
2025-06-02 20:12:58.719466 | orchestrator | changed: [testbed-manager]
2025-06-02 20:12:58.719473 | orchestrator |
2025-06-02 20:12:58.719479 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-06-02 20:12:58.719485 | orchestrator | Monday 02 June 2025 20:12:15 +0000 (0:00:00.555) 0:00:47.065 ***********
2025-06-02 20:12:58.719491 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-06-02 20:12:58.719497 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-06-02 20:12:58.719513 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-06-02 20:12:58.719520 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-06-02 20:12:58.719526 | orchestrator |
2025-06-02 20:12:58.719532 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:12:58.719539 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:12:58.719601 | orchestrator |
2025-06-02 20:12:58.719614 | orchestrator |
2025-06-02 20:12:58.719633 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:12:58.719640 | orchestrator | Monday 02 June 2025 20:12:17 +0000 (0:00:01.446) 0:00:48.511 ***********
2025-06-02 20:12:58.719647 | orchestrator | ===============================================================================
2025-06-02 20:12:58.719857 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.13s
2025-06-02 20:12:58.719866 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.91s
2025-06-02 20:12:58.719872 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.67s
2025-06-02 20:12:58.719878 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.45s
2025-06-02 20:12:58.719885 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.08s
2025-06-02 20:12:58.719891 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.03s
2025-06-02 20:12:58.719992 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s
2025-06-02 20:12:58.720006 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.83s
2025-06-02 20:12:58.720016 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.70s
2025-06-02 20:12:58.720026 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.56s
2025-06-02 20:12:58.720035 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s
2025-06-02 20:12:58.720046 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s
2025-06-02 20:12:58.720056 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s
2025-06-02 20:12:58.720077 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2025-06-02 20:12:58.720088 | orchestrator |
2025-06-02 20:12:58.720098 | orchestrator |
2025-06-02 20:12:58.720108 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:12:58.720118 | orchestrator |
2025-06-02 20:12:58.720189 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:12:58.720195 | orchestrator | Monday 02 June 2025 20:12:20 +0000 (0:00:00.163) 0:00:00.163 ***********
2025-06-02 20:12:58.720201 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:12:58.720208 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:12:58.720214 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:12:58.720220 | orchestrator |
2025-06-02 20:12:58.720260 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:12:58.720272 | orchestrator | Monday 02 June 2025 20:12:21 +0000 (0:00:00.251) 0:00:00.415 ***********
2025-06-02 20:12:58.720281 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-02 20:12:58.720291 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-02 20:12:58.720300 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-02 20:12:58.720310 | orchestrator |
2025-06-02 20:12:58.720553 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-02 20:12:58.720574 | orchestrator |
2025-06-02 20:12:58.720659 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-02 20:12:58.720669 | orchestrator | Monday 02 June 2025 20:12:21 +0000 (0:00:00.559) 0:00:00.975 ***********
2025-06-02 20:12:58.720678 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:12:58.720688 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:12:58.720698 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:12:58.720709 | orchestrator |
2025-06-02 20:12:58.720721 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:12:58.720733 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:12:58.720745 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:12:58.720755 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:12:58.720765 | orchestrator |
2025-06-02 20:12:58.720775 | orchestrator |
2025-06-02 20:12:58.720785 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:12:58.720794 | orchestrator | Monday 02 June 2025 20:12:22 +0000 (0:00:00.643) 0:00:01.619 ***********
2025-06-02 20:12:58.720802 | orchestrator | ===============================================================================
2025-06-02 20:12:58.720810 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.64s
2025-06-02 20:12:58.720819 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s
2025-06-02 20:12:58.720828 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2025-06-02 20:12:58.720836 | orchestrator |
2025-06-02 20:12:58.720845 | orchestrator | 2025-06-02 20:12:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:12:58.720887 | orchestrator |
2025-06-02 20:12:58.720899 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:12:58.720908 | orchestrator |
2025-06-02 20:12:58.720918 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:12:58.720928 | orchestrator | Monday 02 June 2025 20:10:21 +0000 (0:00:00.253) 0:00:00.253 ***********
2025-06-02 20:12:58.720937 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:12:58.720964 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:12:58.720974 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:12:58.720983 | orchestrator |
2025-06-02 20:12:58.720993 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:12:58.721013 | orchestrator | Monday 02 June 2025 20:10:22 +0000 (0:00:00.267) 0:00:00.521 ***********
2025-06-02 20:12:58.721023 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-02 20:12:58.721032 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-02 20:12:58.721041 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-02 20:12:58.721051 | orchestrator |
2025-06-02 20:12:58.721061 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-02 20:12:58.721071 | orchestrator |
2025-06-02 20:12:58.721080 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-02 20:12:58.721089 | orchestrator | Monday 02 June 2025 20:10:22 +0000 (0:00:00.411) 0:00:00.932 ***********
2025-06-02 20:12:58.721098 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:12:58.721107 | orchestrator |
2025-06-02 20:12:58.721116 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-06-02 20:12:58.721126 | orchestrator | Monday 02 June 2025 20:10:23 +0000 (0:00:00.560) 0:00:01.492 ***********
2025-06-02 20:12:58.721141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:12:58.721155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:12:58.721199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:12:58.721228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:12:58.721240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:12:58.721250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:12:58.721262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:12:58.721273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:12:58.721285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:12:58.721295 | orchestrator |
2025-06-02 20:12:58.721305 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-06-02 20:12:58.721323 | orchestrator | Monday 02 June 2025 20:10:24 +0000 (0:00:01.682) 0:00:03.175 ***********
2025-06-02 20:12:58.721341 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-06-02 20:12:58.721352 | orchestrator |
2025-06-02 20:12:58.721362 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-06-02 20:12:58.721373 | orchestrator | Monday 02 June 2025 20:10:25 +0000 (0:00:00.839) 0:00:04.015 ***********
2025-06-02 20:12:58.721383 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:12:58.721394 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:12:58.721405 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:12:58.721414 | orchestrator |
2025-06-02 20:12:58.721429 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-06-02 20:12:58.721439 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.356) 0:00:04.371 ***********
2025-06-02 20:12:58.721449 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:12:58.721459 | orchestrator |
2025-06-02 20:12:58.721469 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-02 20:12:58.721478 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.589) 0:00:04.960 ***********
2025-06-02 20:12:58.721487 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:12:58.721496 | orchestrator |
2025-06-02 20:12:58.721505 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-06-02 20:12:58.721514 | orchestrator | Monday 02 June 2025 20:10:27 +0000 (0:00:00.458) 0:00:05.419 ***********
2025-06-02 20:12:58.721524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:12:58.721535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:12:58.721546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:12:58.721645 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:12:58.721662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:12:58.721674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:12:58.721686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.721697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.721707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.721723 | orchestrator | 2025-06-02 20:12:58.721733 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-02 20:12:58.721743 | orchestrator | Monday 02 June 
2025 20:10:30 +0000 (0:00:03.359) 0:00:08.778 *********** 2025-06-02 20:12:58.721769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:12:58.721780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.721790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:12:58.721801 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:58.721811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:12:58.721829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.721846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:12:58.721857 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:58.721873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2025-06-02 20:12:58.721884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.721894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:12:58.721905 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:58.721914 | orchestrator | 2025-06-02 20:12:58.721924 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-02 20:12:58.721934 | orchestrator | Monday 02 June 2025 20:10:30 +0000 (0:00:00.497) 0:00:09.276 *********** 2025-06-02 20:12:58.721952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:12:58.721969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.721984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2025-06-02 20:12:58.721995 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:58.722005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:12:58.722060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.722081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:12:58.722092 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:58.722103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:12:58.722132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.722144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:12:58.722155 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:58.722166 | orchestrator | 2025-06-02 20:12:58.722176 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-02 20:12:58.722187 | orchestrator | Monday 02 June 2025 20:10:31 +0000 (0:00:00.798) 0:00:10.074 *********** 2025-06-02 20:12:58.722199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.722219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.722238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.722256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722328 | orchestrator | 2025-06-02 20:12:58.722338 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-02 20:12:58.722348 | orchestrator | Monday 02 June 2025 20:10:35 +0000 (0:00:03.797) 0:00:13.872 *********** 2025-06-02 20:12:58.722374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.722387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.722399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.722417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.722436 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.722453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.722465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722505 | orchestrator | 2025-06-02 20:12:58.722515 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-02 20:12:58.722526 | orchestrator | Monday 02 June 2025 20:10:40 +0000 (0:00:05.359) 0:00:19.232 *********** 2025-06-02 20:12:58.722536 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:58.722546 | 
orchestrator | changed: [testbed-node-2]
2025-06-02 20:12:58.722557 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:12:58.722567 | orchestrator |
2025-06-02 20:12:58.722598 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-06-02 20:12:58.722609 | orchestrator | Monday 02 June 2025 20:10:42 +0000 (0:00:01.520) 0:00:20.752 ***********
2025-06-02 20:12:58.722619 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:12:58.722629 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:12:58.722638 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:12:58.722648 | orchestrator |
2025-06-02 20:12:58.722658 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-06-02 20:12:58.722667 | orchestrator | Monday 02 June 2025 20:10:42 +0000 (0:00:00.462) 0:00:21.294 ***********
2025-06-02 20:12:58.722674 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:12:58.722680 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:12:58.722686 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:12:58.722692 | orchestrator |
2025-06-02 20:12:58.722699 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-06-02 20:12:58.722705 | orchestrator | Monday 02 June 2025 20:10:43 +0000 (0:00:00.309) 0:00:21.757 ***********
2025-06-02 20:12:58.722711 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:12:58.722717 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:12:58.722723 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:12:58.722730 | orchestrator |
2025-06-02 20:12:58.722736 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-06-02 20:12:58.722742 | orchestrator | Monday 02 June 2025 20:10:43 +0000 (0:00:00.309) 0:00:22.066 ***********
2025-06-02 20:12:58.722761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone',
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.722775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.722782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.722789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.722795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.722810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:12:58.722821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.722827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:12:58.722834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:12:58.722840 | orchestrator |
2025-06-02 20:12:58.722847 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-02 20:12:58.722853 | orchestrator | Monday 02 June 2025 20:10:46 +0000 (0:00:02.549) 0:00:24.616 ***********
2025-06-02 20:12:58.722859 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:12:58.722865 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:12:58.722871 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:12:58.722878 | orchestrator |
2025-06-02 20:12:58.722884 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-06-02 20:12:58.722890 | orchestrator | Monday 02 June 2025 20:10:46 +0000 (0:00:00.284) 0:00:24.900 ***********
2025-06-02 20:12:58.722896 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-02 20:12:58.722903 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-02 20:12:58.722914 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-02 20:12:58.722924 | orchestrator |
2025-06-02 20:12:58.722933 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-06-02 20:12:58.722943 | orchestrator | Monday 02 June 2025 20:10:48 +0000 (0:00:02.035) 0:00:26.936 ***********
2025-06-02 20:12:58.722954 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:12:58.722963 | orchestrator |
2025-06-02 20:12:58.722973 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-06-02 20:12:58.722983 | orchestrator | Monday 02 June 2025 20:10:49 +0000 (0:00:00.910) 0:00:27.846 ***********
2025-06-02 20:12:58.722993 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:12:58.723002 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:12:58.723013 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:12:58.723023 | orchestrator |
2025-06-02 20:12:58.723034 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-06-02 20:12:58.723044 | orchestrator | Monday 02 June 2025 20:10:50 +0000 (0:00:00.506) 0:00:28.353 ***********
2025-06-02 20:12:58.723053 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:12:58.723065 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 20:12:58.723071 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 20:12:58.723077 | orchestrator |
2025-06-02 20:12:58.723083 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-06-02 20:12:58.723090 | orchestrator | Monday 02 June 2025 20:10:51 +0000 (0:00:01.080) 0:00:29.434 ***********
2025-06-02 20:12:58.723101 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:12:58.723108 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:12:58.723114 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:12:58.723120 | orchestrator |
2025-06-02 20:12:58.723126 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-06-02 20:12:58.723132 | orchestrator | Monday 02 June 2025 20:10:51 +0000 (0:00:00.262) 0:00:29.697 ***********
2025-06-02 20:12:58.723138 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-02 20:12:58.723145 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-02 20:12:58.723151 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-02 20:12:58.723157 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-02 20:12:58.723163 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-02 20:12:58.723170 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-02 20:12:58.723176 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-02 20:12:58.723182 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-02 20:12:58.723188 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-02 20:12:58.723194 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-02 20:12:58.723200 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-02 20:12:58.723206 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-02 20:12:58.723213 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-02 20:12:58.723219 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-02 20:12:58.723225 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-02 20:12:58.723231 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-02 20:12:58.723237 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-02 20:12:58.723243 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-02 20:12:58.723249 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-02 20:12:58.723255 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-02 20:12:58.723292 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-02 20:12:58.723299 | orchestrator |
2025-06-02 20:12:58.723305 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-06-02 20:12:58.723311 | orchestrator | Monday 02 June 2025 20:11:00 +0000 (0:00:08.801) 0:00:38.499 ***********
2025-06-02 20:12:58.723318 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-02 20:12:58.723324 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-02 20:12:58.723334 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-02 20:12:58.723341 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-02 20:12:58.723347 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest':
'id_rsa.pub'}) 2025-06-02 20:12:58.723353 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:12:58.723359 | orchestrator | 2025-06-02 20:12:58.723365 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-02 20:12:58.723371 | orchestrator | Monday 02 June 2025 20:11:02 +0000 (0:00:02.392) 0:00:40.892 *********** 2025-06-02 20:12:58.723382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.723396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.723404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:12:58.723411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:12:58.723423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:12:58.723429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:12:58.723443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.723450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.723457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:12:58.723463 | orchestrator | 2025-06-02 20:12:58.723470 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 20:12:58.723476 | orchestrator | Monday 02 June 2025 20:11:04 +0000 (0:00:02.140) 0:00:43.032 *********** 2025-06-02 20:12:58.723482 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:58.723488 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:58.723494 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:58.723500 | orchestrator 
|
2025-06-02 20:12:58.723511 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-06-02 20:12:58.723517 | orchestrator | Monday 02 June 2025 20:11:04 +0000 (0:00:00.246) 0:00:43.279 ***********
2025-06-02 20:12:58.723523 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:58.723529 | orchestrator |
2025-06-02 20:12:58.723535 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-06-02 20:12:58.723541 | orchestrator | Monday 02 June 2025 20:11:06 +0000 (0:00:01.896) 0:00:45.175 ***********
2025-06-02 20:12:58.723548 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:58.723554 | orchestrator |
2025-06-02 20:12:58.723560 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-06-02 20:12:58.723566 | orchestrator | Monday 02 June 2025 20:11:09 +0000 (0:00:02.705) 0:00:47.880 ***********
2025-06-02 20:12:58.723572 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:12:58.723600 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:12:58.723607 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:12:58.723613 | orchestrator |
2025-06-02 20:12:58.723619 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-06-02 20:12:58.723627 | orchestrator | Monday 02 June 2025 20:11:10 +0000 (0:00:00.917) 0:00:48.798 ***********
2025-06-02 20:12:58.723637 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:12:58.723648 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:12:58.723658 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:12:58.723668 | orchestrator |
2025-06-02 20:12:58.723678 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-06-02 20:12:58.723688 | orchestrator | Monday 02 June 2025 20:11:10 +0000 (0:00:00.379) 0:00:49.178 ***********
2025-06-02 20:12:58.723698 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:12:58.723708 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:12:58.723719 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:12:58.723729 | orchestrator |
2025-06-02 20:12:58.723740 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-06-02 20:12:58.723751 | orchestrator | Monday 02 June 2025 20:11:11 +0000 (0:00:00.369) 0:00:49.547 ***********
2025-06-02 20:12:58.723761 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:58.723771 | orchestrator |
2025-06-02 20:12:58.723782 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-06-02 20:12:58.723788 | orchestrator | Monday 02 June 2025 20:11:25 +0000 (0:00:14.374) 0:01:03.922 ***********
2025-06-02 20:12:58.723795 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:58.723802 | orchestrator |
2025-06-02 20:12:58.723808 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-02 20:12:58.723814 | orchestrator | Monday 02 June 2025 20:11:35 +0000 (0:00:09.836) 0:01:13.758 ***********
2025-06-02 20:12:58.723820 | orchestrator |
2025-06-02 20:12:58.723827 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-02 20:12:58.723833 | orchestrator | Monday 02 June 2025 20:11:35 +0000 (0:00:00.188) 0:01:13.947 ***********
2025-06-02 20:12:58.723839 | orchestrator |
2025-06-02 20:12:58.723846 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-02 20:12:58.723852 | orchestrator | Monday 02 June 2025 20:11:35 +0000 (0:00:00.058) 0:01:14.005 ***********
2025-06-02 20:12:58.723858 | orchestrator |
2025-06-02 20:12:58.723870 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-06-02 20:12:58.723877 | orchestrator | Monday 02 June 2025 20:11:35 +0000 (0:00:00.058) 0:01:14.064 ***********
2025-06-02 20:12:58.723883 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:58.723889 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:12:58.723895 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:12:58.723902 | orchestrator |
2025-06-02 20:12:58.723917 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-06-02 20:12:58.723927 | orchestrator | Monday 02 June 2025 20:11:54 +0000 (0:00:19.008) 0:01:33.073 ***********
2025-06-02 20:12:58.723937 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:12:58.723948 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:12:58.723966 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:58.723977 | orchestrator |
2025-06-02 20:12:58.723988 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-06-02 20:12:58.723996 | orchestrator | Monday 02 June 2025 20:12:02 +0000 (0:00:07.432) 0:01:40.505 ***********
2025-06-02 20:12:58.724002 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:58.724008 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:12:58.724015 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:12:58.724021 | orchestrator |
2025-06-02 20:12:58.724027 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-02 20:12:58.724034 | orchestrator | Monday 02 June 2025 20:12:08 +0000 (0:00:05.953) 0:01:46.458 ***********
2025-06-02 20:12:58.724040 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:12:58.724047 | orchestrator |
2025-06-02 20:12:58.724054 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-06-02 20:12:58.724060 | orchestrator | Monday 02 June 2025 20:12:08 +0000 (0:00:00.757) 0:01:47.216 ***********
2025-06-02 20:12:58.724066 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:12:58.724072 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:12:58.724079 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:12:58.724085 | orchestrator |
2025-06-02 20:12:58.724092 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-06-02 20:12:58.724098 | orchestrator | Monday 02 June 2025 20:12:09 +0000 (0:00:00.751) 0:01:47.967 ***********
2025-06-02 20:12:58.724104 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:12:58.724110 | orchestrator |
2025-06-02 20:12:58.724117 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-06-02 20:12:58.724123 | orchestrator | Monday 02 June 2025 20:12:11 +0000 (0:00:01.845) 0:01:49.812 ***********
2025-06-02 20:12:58.724129 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-06-02 20:12:58.724136 | orchestrator |
2025-06-02 20:12:58.724142 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-06-02 20:12:58.724148 | orchestrator | Monday 02 June 2025 20:12:22 +0000 (0:00:11.001) 0:02:00.814 ***********
2025-06-02 20:12:58.724155 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-06-02 20:12:58.724161 | orchestrator |
2025-06-02 20:12:58.724168 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-06-02 20:12:58.724174 | orchestrator | Monday 02 June 2025 20:12:44 +0000 (0:00:22.030) 0:02:22.844 ***********
2025-06-02 20:12:58.724180 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-06-02 20:12:58.724186 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-06-02 20:12:58.724193 | orchestrator |
2025-06-02 20:12:58.724199 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-06-02 20:12:58.724205
| orchestrator | Monday 02 June 2025 20:12:51 +0000 (0:00:06.564) 0:02:29.408 *********** 2025-06-02 20:12:58.724212 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:58.724218 | orchestrator | 2025-06-02 20:12:58.724224 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-02 20:12:58.724230 | orchestrator | Monday 02 June 2025 20:12:51 +0000 (0:00:00.555) 0:02:29.964 *********** 2025-06-02 20:12:58.724236 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:58.724243 | orchestrator | 2025-06-02 20:12:58.724249 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-02 20:12:58.724255 | orchestrator | Monday 02 June 2025 20:12:51 +0000 (0:00:00.202) 0:02:30.167 *********** 2025-06-02 20:12:58.724261 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:58.724268 | orchestrator | 2025-06-02 20:12:58.724274 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-02 20:12:58.724280 | orchestrator | Monday 02 June 2025 20:12:52 +0000 (0:00:00.263) 0:02:30.430 *********** 2025-06-02 20:12:58.724291 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:58.724297 | orchestrator | 2025-06-02 20:12:58.724304 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-02 20:12:58.724310 | orchestrator | Monday 02 June 2025 20:12:52 +0000 (0:00:00.431) 0:02:30.862 *********** 2025-06-02 20:12:58.724316 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:58.724323 | orchestrator | 2025-06-02 20:12:58.724329 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 20:12:58.724336 | orchestrator | Monday 02 June 2025 20:12:55 +0000 (0:00:03.263) 0:02:34.125 *********** 2025-06-02 20:12:58.724342 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:58.724348 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 20:12:58.724355 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:58.724361 | orchestrator | 2025-06-02 20:12:58.724367 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:12:58.724373 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-02 20:12:58.724381 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 20:12:58.724392 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 20:12:58.724398 | orchestrator | 2025-06-02 20:12:58.724404 | orchestrator | 2025-06-02 20:12:58.724411 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:12:58.724417 | orchestrator | Monday 02 June 2025 20:12:56 +0000 (0:00:00.421) 0:02:34.547 *********** 2025-06-02 20:12:58.724427 | orchestrator | =============================================================================== 2025-06-02 20:12:58.724434 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.03s 2025-06-02 20:12:58.724440 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.01s 2025-06-02 20:12:58.724447 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.37s 2025-06-02 20:12:58.724453 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.00s 2025-06-02 20:12:58.724459 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.84s 2025-06-02 20:12:58.724465 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.80s 2025-06-02 20:12:58.724472 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.43s 2025-06-02 
20:12:58.724478 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.56s 2025-06-02 20:12:58.724484 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.95s 2025-06-02 20:12:58.724490 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.36s 2025-06-02 20:12:58.724497 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.80s 2025-06-02 20:12:58.724505 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.36s 2025-06-02 20:12:58.724516 | orchestrator | keystone : Creating default user role ----------------------------------- 3.26s 2025-06-02 20:12:58.724525 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.71s 2025-06-02 20:12:58.724536 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.55s 2025-06-02 20:12:58.724545 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.39s 2025-06-02 20:12:58.724555 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.14s 2025-06-02 20:12:58.724565 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.04s 2025-06-02 20:12:58.724598 | orchestrator | keystone : Creating keystone database ----------------------------------- 1.90s 2025-06-02 20:12:58.724612 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.85s 2025-06-02 20:13:01.758249 | orchestrator | 2025-06-02 20:13:01 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:01.760330 | orchestrator | 2025-06-02 20:13:01 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:01.761435 | orchestrator | 2025-06-02 20:13:01 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in 
state STARTED 2025-06-02 20:13:01.762774 | orchestrator | 2025-06-02 20:13:01 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:13:01.763235 | orchestrator | 2025-06-02 20:13:01 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:01.763390 | orchestrator | 2025-06-02 20:13:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:04.803437 | orchestrator | 2025-06-02 20:13:04 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:04.803543 | orchestrator | 2025-06-02 20:13:04 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:04.805129 | orchestrator | 2025-06-02 20:13:04 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:04.805772 | orchestrator | 2025-06-02 20:13:04 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:13:04.806916 | orchestrator | 2025-06-02 20:13:04 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:04.806943 | orchestrator | 2025-06-02 20:13:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:07.849257 | orchestrator | 2025-06-02 20:13:07 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:07.850604 | orchestrator | 2025-06-02 20:13:07 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:07.851836 | orchestrator | 2025-06-02 20:13:07 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:07.853139 | orchestrator | 2025-06-02 20:13:07 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state STARTED 2025-06-02 20:13:07.855125 | orchestrator | 2025-06-02 20:13:07 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:07.855189 | orchestrator | 2025-06-02 20:13:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 
20:13:10.887539 | orchestrator | 2025-06-02 20:13:10 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:10.887812 | orchestrator | 2025-06-02 20:13:10 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:10.896808 | orchestrator | 2025-06-02 20:13:10 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:10.896920 | orchestrator | 2025-06-02 20:13:10 | INFO  | Task 5be2023a-6ce3-48f9-8451-dccbfa9276bb is in state SUCCESS 2025-06-02 20:13:10.897601 | orchestrator | 2025-06-02 20:13:10 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:10.897656 | orchestrator | 2025-06-02 20:13:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:13.923064 | orchestrator | 2025-06-02 20:13:13 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:13.927093 | orchestrator | 2025-06-02 20:13:13 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:13.927199 | orchestrator | 2025-06-02 20:13:13 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:13.927214 | orchestrator | 2025-06-02 20:13:13 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:13.927916 | orchestrator | 2025-06-02 20:13:13 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:13.927943 | orchestrator | 2025-06-02 20:13:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:16.958862 | orchestrator | 2025-06-02 20:13:16 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:16.959275 | orchestrator | 2025-06-02 20:13:16 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:16.959850 | orchestrator | 2025-06-02 20:13:16 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 
20:13:16.962112 | orchestrator | 2025-06-02 20:13:16 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:16.962203 | orchestrator | 2025-06-02 20:13:16 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:16.962215 | orchestrator | 2025-06-02 20:13:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:19.985066 | orchestrator | 2025-06-02 20:13:19 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:19.985237 | orchestrator | 2025-06-02 20:13:19 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:19.985790 | orchestrator | 2025-06-02 20:13:19 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:19.986497 | orchestrator | 2025-06-02 20:13:19 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:19.987983 | orchestrator | 2025-06-02 20:13:19 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:19.989172 | orchestrator | 2025-06-02 20:13:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:23.021796 | orchestrator | 2025-06-02 20:13:23 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:23.021909 | orchestrator | 2025-06-02 20:13:23 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:23.021924 | orchestrator | 2025-06-02 20:13:23 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:23.022422 | orchestrator | 2025-06-02 20:13:23 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:23.023083 | orchestrator | 2025-06-02 20:13:23 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:23.023144 | orchestrator | 2025-06-02 20:13:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:26.051323 | orchestrator 
| 2025-06-02 20:13:26 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:26.052491 | orchestrator | 2025-06-02 20:13:26 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:26.056092 | orchestrator | 2025-06-02 20:13:26 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:26.056157 | orchestrator | 2025-06-02 20:13:26 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:26.056899 | orchestrator | 2025-06-02 20:13:26 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:26.056921 | orchestrator | 2025-06-02 20:13:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:29.085920 | orchestrator | 2025-06-02 20:13:29 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:29.086784 | orchestrator | 2025-06-02 20:13:29 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:29.086861 | orchestrator | 2025-06-02 20:13:29 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:29.087991 | orchestrator | 2025-06-02 20:13:29 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:29.089985 | orchestrator | 2025-06-02 20:13:29 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:29.090075 | orchestrator | 2025-06-02 20:13:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:32.114779 | orchestrator | 2025-06-02 20:13:32 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:32.116022 | orchestrator | 2025-06-02 20:13:32 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:32.116395 | orchestrator | 2025-06-02 20:13:32 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:32.117430 | orchestrator | 
2025-06-02 20:13:32 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:32.117899 | orchestrator | 2025-06-02 20:13:32 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:32.119735 | orchestrator | 2025-06-02 20:13:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:35.141500 | orchestrator | 2025-06-02 20:13:35 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:35.141701 | orchestrator | 2025-06-02 20:13:35 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:35.142093 | orchestrator | 2025-06-02 20:13:35 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:35.143710 | orchestrator | 2025-06-02 20:13:35 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:35.144131 | orchestrator | 2025-06-02 20:13:35 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:35.144162 | orchestrator | 2025-06-02 20:13:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:38.187485 | orchestrator | 2025-06-02 20:13:38 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:38.188272 | orchestrator | 2025-06-02 20:13:38 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:38.188884 | orchestrator | 2025-06-02 20:13:38 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:38.190306 | orchestrator | 2025-06-02 20:13:38 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:38.191720 | orchestrator | 2025-06-02 20:13:38 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:38.191796 | orchestrator | 2025-06-02 20:13:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:41.223399 | orchestrator | 2025-06-02 20:13:41 | INFO  | 
Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:41.224224 | orchestrator | 2025-06-02 20:13:41 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:41.224275 | orchestrator | 2025-06-02 20:13:41 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:41.224286 | orchestrator | 2025-06-02 20:13:41 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:41.224295 | orchestrator | 2025-06-02 20:13:41 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:41.224304 | orchestrator | 2025-06-02 20:13:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:44.271357 | orchestrator | 2025-06-02 20:13:44 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:44.271538 | orchestrator | 2025-06-02 20:13:44 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:44.272016 | orchestrator | 2025-06-02 20:13:44 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:44.272488 | orchestrator | 2025-06-02 20:13:44 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:44.273268 | orchestrator | 2025-06-02 20:13:44 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:44.273294 | orchestrator | 2025-06-02 20:13:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:47.309187 | orchestrator | 2025-06-02 20:13:47 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:47.309299 | orchestrator | 2025-06-02 20:13:47 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:47.309763 | orchestrator | 2025-06-02 20:13:47 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:47.310276 | orchestrator | 2025-06-02 20:13:47 | INFO  | Task 
626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:47.313019 | orchestrator | 2025-06-02 20:13:47 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:47.313299 | orchestrator | 2025-06-02 20:13:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:50.368954 | orchestrator | 2025-06-02 20:13:50 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:50.369705 | orchestrator | 2025-06-02 20:13:50 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:50.372422 | orchestrator | 2025-06-02 20:13:50 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:50.375251 | orchestrator | 2025-06-02 20:13:50 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state STARTED 2025-06-02 20:13:50.376399 | orchestrator | 2025-06-02 20:13:50 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:50.376433 | orchestrator | 2025-06-02 20:13:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:53.430876 | orchestrator | 2025-06-02 20:13:53 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:53.432241 | orchestrator | 2025-06-02 20:13:53 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:53.436024 | orchestrator | 2025-06-02 20:13:53 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:53.436938 | orchestrator | 2025-06-02 20:13:53 | INFO  | Task 626b0377-bf7b-4330-ac1d-d10cc4cf642b is in state SUCCESS 2025-06-02 20:13:53.437875 | orchestrator | 2025-06-02 20:13:53.437950 | orchestrator | 2025-06-02 20:13:53.437964 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:13:53.437976 | orchestrator | 2025-06-02 20:13:53.437986 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-06-02 20:13:53.437997 | orchestrator | Monday 02 June 2025 20:12:27 +0000 (0:00:00.273) 0:00:00.273 *********** 2025-06-02 20:13:53.438007 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:13:53.438064 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:13:53.438078 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:13:53.438088 | orchestrator | ok: [testbed-manager] 2025-06-02 20:13:53.438097 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:13:53.438107 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:13:53.438117 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:13:53.438126 | orchestrator | 2025-06-02 20:13:53.438136 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:13:53.438173 | orchestrator | Monday 02 June 2025 20:12:28 +0000 (0:00:00.874) 0:00:01.148 *********** 2025-06-02 20:13:53.438184 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-02 20:13:53.438194 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-02 20:13:53.438210 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-02 20:13:53.438226 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-02 20:13:53.438244 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-02 20:13:53.438261 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-02 20:13:53.438277 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-02 20:13:53.438294 | orchestrator | 2025-06-02 20:13:53.438311 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-02 20:13:53.438327 | orchestrator | 2025-06-02 20:13:53.438343 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-02 20:13:53.438359 | orchestrator | Monday 02 June 2025 20:12:29 +0000 
(0:00:01.208) 0:00:02.356 *********** 2025-06-02 20:13:53.438376 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:13:53.438393 | orchestrator | 2025-06-02 20:13:53.438408 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-02 20:13:53.438424 | orchestrator | Monday 02 June 2025 20:12:31 +0000 (0:00:01.674) 0:00:04.031 *********** 2025-06-02 20:13:53.438440 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-02 20:13:53.438456 | orchestrator | 2025-06-02 20:13:53.438472 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-02 20:13:53.438488 | orchestrator | Monday 02 June 2025 20:12:43 +0000 (0:00:12.017) 0:00:16.048 *********** 2025-06-02 20:13:53.438505 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-02 20:13:53.438523 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-02 20:13:53.438570 | orchestrator | 2025-06-02 20:13:53.438586 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-02 20:13:53.438601 | orchestrator | Monday 02 June 2025 20:12:50 +0000 (0:00:07.066) 0:00:23.115 *********** 2025-06-02 20:13:53.438616 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:13:53.438631 | orchestrator | 2025-06-02 20:13:53.438663 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-02 20:13:53.438678 | orchestrator | Monday 02 June 2025 20:12:54 +0000 (0:00:03.533) 0:00:26.648 *********** 2025-06-02 20:13:53.438695 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2025-06-02 20:13:53.438711 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-02 20:13:53.438726 | orchestrator | 2025-06-02 20:13:53.438741 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-02 20:13:53.438756 | orchestrator | Monday 02 June 2025 20:12:58 +0000 (0:00:04.200) 0:00:30.849 *********** 2025-06-02 20:13:53.438770 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:13:53.438785 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-02 20:13:53.438800 | orchestrator | 2025-06-02 20:13:53.438815 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-02 20:13:53.438830 | orchestrator | Monday 02 June 2025 20:13:05 +0000 (0:00:06.669) 0:00:37.518 *********** 2025-06-02 20:13:53.438845 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-02 20:13:53.438859 | orchestrator | 2025-06-02 20:13:53.438875 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:13:53.438891 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.438919 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.438937 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.438953 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.438969 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.439005 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.439021 | orchestrator | testbed-node-5 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.439036 | orchestrator | 2025-06-02 20:13:53.439052 | orchestrator | 2025-06-02 20:13:53.439096 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:13:53.439111 | orchestrator | Monday 02 June 2025 20:13:10 +0000 (0:00:05.124) 0:00:42.642 *********** 2025-06-02 20:13:53.439126 | orchestrator | =============================================================================== 2025-06-02 20:13:53.439141 | orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 12.02s 2025-06-02 20:13:53.439155 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.07s 2025-06-02 20:13:53.439171 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.67s 2025-06-02 20:13:53.439186 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.12s 2025-06-02 20:13:53.439201 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.20s 2025-06-02 20:13:53.439216 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.53s 2025-06-02 20:13:53.439232 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.67s 2025-06-02 20:13:53.439247 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.21s 2025-06-02 20:13:53.439261 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s 2025-06-02 20:13:53.439277 | orchestrator | 2025-06-02 20:13:53.439292 | orchestrator | 2025-06-02 20:13:53.439307 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-06-02 20:13:53.439321 | orchestrator | 2025-06-02 20:13:53.439336 | orchestrator | TASK [Disable the ceph dashboard] 
********************************************** 2025-06-02 20:13:53.439351 | orchestrator | Monday 02 June 2025 20:12:20 +0000 (0:00:00.241) 0:00:00.241 *********** 2025-06-02 20:13:53.439366 | orchestrator | changed: [testbed-manager] 2025-06-02 20:13:53.439381 | orchestrator | 2025-06-02 20:13:53.439396 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-02 20:13:53.439412 | orchestrator | Monday 02 June 2025 20:12:22 +0000 (0:00:01.401) 0:00:01.643 *********** 2025-06-02 20:13:53.439426 | orchestrator | changed: [testbed-manager] 2025-06-02 20:13:53.439442 | orchestrator | 2025-06-02 20:13:53.439457 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-02 20:13:53.439471 | orchestrator | Monday 02 June 2025 20:12:23 +0000 (0:00:00.887) 0:00:02.530 *********** 2025-06-02 20:13:53.439486 | orchestrator | changed: [testbed-manager] 2025-06-02 20:13:53.439502 | orchestrator | 2025-06-02 20:13:53.439516 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-02 20:13:53.439532 | orchestrator | Monday 02 June 2025 20:12:24 +0000 (0:00:00.949) 0:00:03.480 *********** 2025-06-02 20:13:53.439574 | orchestrator | changed: [testbed-manager] 2025-06-02 20:13:53.439592 | orchestrator | 2025-06-02 20:13:53.439608 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-02 20:13:53.439624 | orchestrator | Monday 02 June 2025 20:12:25 +0000 (0:00:01.138) 0:00:04.619 *********** 2025-06-02 20:13:53.439651 | orchestrator | changed: [testbed-manager] 2025-06-02 20:13:53.439668 | orchestrator | 2025-06-02 20:13:53.439684 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-02 20:13:53.439701 | orchestrator | Monday 02 June 2025 20:12:26 +0000 (0:00:00.892) 0:00:05.511 *********** 2025-06-02 20:13:53.439728 | orchestrator | 
changed: [testbed-manager] 2025-06-02 20:13:53.439744 | orchestrator | 2025-06-02 20:13:53.439761 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-02 20:13:53.439777 | orchestrator | Monday 02 June 2025 20:12:27 +0000 (0:00:00.939) 0:00:06.450 *********** 2025-06-02 20:13:53.439794 | orchestrator | changed: [testbed-manager] 2025-06-02 20:13:53.439811 | orchestrator | 2025-06-02 20:13:53.439828 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-02 20:13:53.439845 | orchestrator | Monday 02 June 2025 20:12:28 +0000 (0:00:01.199) 0:00:07.650 *********** 2025-06-02 20:13:53.439862 | orchestrator | changed: [testbed-manager] 2025-06-02 20:13:53.439880 | orchestrator | 2025-06-02 20:13:53.439895 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-02 20:13:53.439912 | orchestrator | Monday 02 June 2025 20:12:29 +0000 (0:00:01.112) 0:00:08.763 *********** 2025-06-02 20:13:53.439929 | orchestrator | changed: [testbed-manager] 2025-06-02 20:13:53.439945 | orchestrator | 2025-06-02 20:13:53.439963 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-02 20:13:53.439979 | orchestrator | Monday 02 June 2025 20:13:26 +0000 (0:00:57.101) 0:01:05.864 *********** 2025-06-02 20:13:53.439996 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:13:53.440014 | orchestrator | 2025-06-02 20:13:53.440030 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 20:13:53.440045 | orchestrator | 2025-06-02 20:13:53.440061 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 20:13:53.440076 | orchestrator | Monday 02 June 2025 20:13:26 +0000 (0:00:00.134) 0:01:05.999 *********** 2025-06-02 20:13:53.440094 | orchestrator | changed: [testbed-node-0] 2025-06-02 
20:13:53.440112 | orchestrator | 2025-06-02 20:13:53.440129 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 20:13:53.440148 | orchestrator | 2025-06-02 20:13:53.440184 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 20:13:53.440201 | orchestrator | Monday 02 June 2025 20:13:38 +0000 (0:00:11.572) 0:01:17.572 *********** 2025-06-02 20:13:53.440218 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:13:53.440235 | orchestrator | 2025-06-02 20:13:53.440249 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 20:13:53.440259 | orchestrator | 2025-06-02 20:13:53.440268 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 20:13:53.440281 | orchestrator | Monday 02 June 2025 20:13:49 +0000 (0:00:11.282) 0:01:28.854 *********** 2025-06-02 20:13:53.440298 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:13:53.440314 | orchestrator | 2025-06-02 20:13:53.440346 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:13:53.440363 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 20:13:53.440380 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.440397 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.440414 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:53.440430 | orchestrator | 2025-06-02 20:13:53.440447 | orchestrator | 2025-06-02 20:13:53.440463 | orchestrator | 2025-06-02 20:13:53.440480 | orchestrator | TASKS RECAP ******************************************************************** 
2025-06-02 20:13:53.440509 | orchestrator | Monday 02 June 2025 20:13:50 +0000 (0:00:01.159) 0:01:30.014 *********** 2025-06-02 20:13:53.440524 | orchestrator | =============================================================================== 2025-06-02 20:13:53.440568 | orchestrator | Create admin user ------------------------------------------------------ 57.10s 2025-06-02 20:13:53.440586 | orchestrator | Restart ceph manager service ------------------------------------------- 24.01s 2025-06-02 20:13:53.440603 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.40s 2025-06-02 20:13:53.440614 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.20s 2025-06-02 20:13:53.440624 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.14s 2025-06-02 20:13:53.440634 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.11s 2025-06-02 20:13:53.440644 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.95s 2025-06-02 20:13:53.440654 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.94s 2025-06-02 20:13:53.440664 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.89s 2025-06-02 20:13:53.440675 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.89s 2025-06-02 20:13:53.440691 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2025-06-02 20:13:53.444907 | orchestrator | 2025-06-02 20:13:53 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:53.444983 | orchestrator | 2025-06-02 20:13:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:56.494711 | orchestrator | 2025-06-02 20:13:56 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 
20:13:56.495909 | orchestrator | 2025-06-02 20:13:56 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:56.496682 | orchestrator | 2025-06-02 20:13:56 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:56.497798 | orchestrator | 2025-06-02 20:13:56 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:56.497828 | orchestrator | 2025-06-02 20:13:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:59.528057 | orchestrator | 2025-06-02 20:13:59 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:13:59.528290 | orchestrator | 2025-06-02 20:13:59 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:13:59.528769 | orchestrator | 2025-06-02 20:13:59 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:13:59.532226 | orchestrator | 2025-06-02 20:13:59 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:13:59.532276 | orchestrator | 2025-06-02 20:13:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:02.558676 | orchestrator | 2025-06-02 20:14:02 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:02.560364 | orchestrator | 2025-06-02 20:14:02 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:02.561864 | orchestrator | 2025-06-02 20:14:02 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:02.562458 | orchestrator | 2025-06-02 20:14:02 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:02.562481 | orchestrator | 2025-06-02 20:14:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:05.599319 | orchestrator | 2025-06-02 20:14:05 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:05.599578 | orchestrator 
| 2025-06-02 20:14:05 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:05.600884 | orchestrator | 2025-06-02 20:14:05 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:05.601733 | orchestrator | 2025-06-02 20:14:05 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:05.601766 | orchestrator | 2025-06-02 20:14:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:08.632665 | orchestrator | 2025-06-02 20:14:08 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:08.633918 | orchestrator | 2025-06-02 20:14:08 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:08.638614 | orchestrator | 2025-06-02 20:14:08 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:08.639406 | orchestrator | 2025-06-02 20:14:08 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:08.639444 | orchestrator | 2025-06-02 20:14:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:11.675600 | orchestrator | 2025-06-02 20:14:11 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:11.675698 | orchestrator | 2025-06-02 20:14:11 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:11.676278 | orchestrator | 2025-06-02 20:14:11 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:11.676760 | orchestrator | 2025-06-02 20:14:11 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:11.676891 | orchestrator | 2025-06-02 20:14:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:14.718348 | orchestrator | 2025-06-02 20:14:14 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:14.720759 | orchestrator | 2025-06-02 20:14:14 | INFO  | 
Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:14.721026 | orchestrator | 2025-06-02 20:14:14 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:14.721941 | orchestrator | 2025-06-02 20:14:14 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:14.722962 | orchestrator | 2025-06-02 20:14:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:17.755389 | orchestrator | 2025-06-02 20:14:17 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:17.755489 | orchestrator | 2025-06-02 20:14:17 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:17.755504 | orchestrator | 2025-06-02 20:14:17 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:17.755676 | orchestrator | 2025-06-02 20:14:17 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:17.755691 | orchestrator | 2025-06-02 20:14:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:20.785612 | orchestrator | 2025-06-02 20:14:20 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:20.788331 | orchestrator | 2025-06-02 20:14:20 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:20.788394 | orchestrator | 2025-06-02 20:14:20 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:20.788405 | orchestrator | 2025-06-02 20:14:20 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:20.788413 | orchestrator | 2025-06-02 20:14:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:23.817339 | orchestrator | 2025-06-02 20:14:23 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:23.817422 | orchestrator | 2025-06-02 20:14:23 | INFO  | Task 
c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:23.818306 | orchestrator | 2025-06-02 20:14:23 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:23.818373 | orchestrator | 2025-06-02 20:14:23 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:23.818385 | orchestrator | 2025-06-02 20:14:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:26.865602 | orchestrator | 2025-06-02 20:14:26 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:26.868102 | orchestrator | 2025-06-02 20:14:26 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:26.869997 | orchestrator | 2025-06-02 20:14:26 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:26.871907 | orchestrator | 2025-06-02 20:14:26 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:26.871958 | orchestrator | 2025-06-02 20:14:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:29.910581 | orchestrator | 2025-06-02 20:14:29 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:29.912124 | orchestrator | 2025-06-02 20:14:29 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:29.913847 | orchestrator | 2025-06-02 20:14:29 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:29.915738 | orchestrator | 2025-06-02 20:14:29 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:29.915783 | orchestrator | 2025-06-02 20:14:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:32.963351 | orchestrator | 2025-06-02 20:14:32 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:32.965818 | orchestrator | 2025-06-02 20:14:32 | INFO  | Task 
c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:32.967287 | orchestrator | 2025-06-02 20:14:32 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:32.968844 | orchestrator | 2025-06-02 20:14:32 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:32.968899 | orchestrator | 2025-06-02 20:14:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:35.995903 | orchestrator | 2025-06-02 20:14:35 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:35.996950 | orchestrator | 2025-06-02 20:14:35 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:35.997661 | orchestrator | 2025-06-02 20:14:35 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:35.998306 | orchestrator | 2025-06-02 20:14:35 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:35.998333 | orchestrator | 2025-06-02 20:14:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:39.045994 | orchestrator | 2025-06-02 20:14:39 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:39.047227 | orchestrator | 2025-06-02 20:14:39 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:39.048449 | orchestrator | 2025-06-02 20:14:39 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:39.049663 | orchestrator | 2025-06-02 20:14:39 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:39.049796 | orchestrator | 2025-06-02 20:14:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:42.091808 | orchestrator | 2025-06-02 20:14:42 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:42.091905 | orchestrator | 2025-06-02 20:14:42 | INFO  | Task 
c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:42.092077 | orchestrator | 2025-06-02 20:14:42 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:42.093355 | orchestrator | 2025-06-02 20:14:42 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:42.093418 | orchestrator | 2025-06-02 20:14:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:45.151987 | orchestrator | 2025-06-02 20:14:45 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:45.152741 | orchestrator | 2025-06-02 20:14:45 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:45.154288 | orchestrator | 2025-06-02 20:14:45 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:45.156759 | orchestrator | 2025-06-02 20:14:45 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:45.156800 | orchestrator | 2025-06-02 20:14:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:48.224946 | orchestrator | 2025-06-02 20:14:48 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:48.225071 | orchestrator | 2025-06-02 20:14:48 | INFO  | Task d56607ec-b7e5-4c9f-a3fa-7ed1861124ec is in state STARTED 2025-06-02 20:14:48.225091 | orchestrator | 2025-06-02 20:14:48 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:48.228622 | orchestrator | 2025-06-02 20:14:48 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:48.230181 | orchestrator | 2025-06-02 20:14:48 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:48.230228 | orchestrator | 2025-06-02 20:14:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:51.289477 | orchestrator | 2025-06-02 20:14:51 | INFO  | Task 
f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:51.291145 | orchestrator | 2025-06-02 20:14:51 | INFO  | Task d56607ec-b7e5-4c9f-a3fa-7ed1861124ec is in state STARTED 2025-06-02 20:14:51.294960 | orchestrator | 2025-06-02 20:14:51 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:51.296264 | orchestrator | 2025-06-02 20:14:51 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:51.299947 | orchestrator | 2025-06-02 20:14:51 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:51.299990 | orchestrator | 2025-06-02 20:14:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:54.360007 | orchestrator | 2025-06-02 20:14:54 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:54.360812 | orchestrator | 2025-06-02 20:14:54 | INFO  | Task d56607ec-b7e5-4c9f-a3fa-7ed1861124ec is in state STARTED 2025-06-02 20:14:54.364809 | orchestrator | 2025-06-02 20:14:54 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:54.365985 | orchestrator | 2025-06-02 20:14:54 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:54.370587 | orchestrator | 2025-06-02 20:14:54 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:54.370679 | orchestrator | 2025-06-02 20:14:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:57.421215 | orchestrator | 2025-06-02 20:14:57 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:14:57.422560 | orchestrator | 2025-06-02 20:14:57 | INFO  | Task d56607ec-b7e5-4c9f-a3fa-7ed1861124ec is in state STARTED 2025-06-02 20:14:57.423582 | orchestrator | 2025-06-02 20:14:57 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:14:57.424807 | orchestrator | 2025-06-02 20:14:57 | INFO  | Task 
b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:14:57.428868 | orchestrator | 2025-06-02 20:14:57 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:14:57.428934 | orchestrator | 2025-06-02 20:14:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:00.481634 | orchestrator | 2025-06-02 20:15:00 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:15:00.483389 | orchestrator | 2025-06-02 20:15:00 | INFO  | Task d56607ec-b7e5-4c9f-a3fa-7ed1861124ec is in state STARTED 2025-06-02 20:15:00.485241 | orchestrator | 2025-06-02 20:15:00 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:00.489990 | orchestrator | 2025-06-02 20:15:00 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:00.492036 | orchestrator | 2025-06-02 20:15:00 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:00.492086 | orchestrator | 2025-06-02 20:15:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:03.542381 | orchestrator | 2025-06-02 20:15:03 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:15:03.543998 | orchestrator | 2025-06-02 20:15:03 | INFO  | Task d56607ec-b7e5-4c9f-a3fa-7ed1861124ec is in state STARTED 2025-06-02 20:15:03.545412 | orchestrator | 2025-06-02 20:15:03 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:03.546993 | orchestrator | 2025-06-02 20:15:03 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:03.548615 | orchestrator | 2025-06-02 20:15:03 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:03.548651 | orchestrator | 2025-06-02 20:15:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:06.600146 | orchestrator | 2025-06-02 20:15:06 | INFO  | Task 
f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:15:06.600260 | orchestrator | 2025-06-02 20:15:06 | INFO  | Task d56607ec-b7e5-4c9f-a3fa-7ed1861124ec is in state SUCCESS 2025-06-02 20:15:06.600925 | orchestrator | 2025-06-02 20:15:06 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:06.601673 | orchestrator | 2025-06-02 20:15:06 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:06.602386 | orchestrator | 2025-06-02 20:15:06 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:06.604608 | orchestrator | 2025-06-02 20:15:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:09.632129 | orchestrator | 2025-06-02 20:15:09 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:15:09.632587 | orchestrator | 2025-06-02 20:15:09 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:09.633230 | orchestrator | 2025-06-02 20:15:09 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:09.635900 | orchestrator | 2025-06-02 20:15:09 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:09.635950 | orchestrator | 2025-06-02 20:15:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:12.688024 | orchestrator | 2025-06-02 20:15:12 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:15:12.688122 | orchestrator | 2025-06-02 20:15:12 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:12.688387 | orchestrator | 2025-06-02 20:15:12 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:12.689236 | orchestrator | 2025-06-02 20:15:12 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:12.689277 | orchestrator | 2025-06-02 20:15:12 | INFO  | Wait 1 
second(s) until the next check 2025-06-02 20:15:15.717566 | orchestrator | 2025-06-02 20:15:15 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:15:15.718857 | orchestrator | 2025-06-02 20:15:15 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:15.720368 | orchestrator | 2025-06-02 20:15:15 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:15.722643 | orchestrator | 2025-06-02 20:15:15 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:15.722669 | orchestrator | 2025-06-02 20:15:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:18.756556 | orchestrator | 2025-06-02 20:15:18 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:15:18.756638 | orchestrator | 2025-06-02 20:15:18 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:18.757213 | orchestrator | 2025-06-02 20:15:18 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:18.758094 | orchestrator | 2025-06-02 20:15:18 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:18.758126 | orchestrator | 2025-06-02 20:15:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:21.803561 | orchestrator | 2025-06-02 20:15:21 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state STARTED 2025-06-02 20:15:21.803637 | orchestrator | 2025-06-02 20:15:21 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:21.804062 | orchestrator | 2025-06-02 20:15:21 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:21.806296 | orchestrator | 2025-06-02 20:15:21 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:21.806351 | orchestrator | 2025-06-02 20:15:21 | INFO  | Wait 1 second(s) until the next check 
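The long run of status lines above is a simple wait loop: the manager polls each OSISM task ID once per cycle, logs its state, and sleeps one second until every task has left STARTED (as happens for task f3f22c85… just below). A minimal sketch of that loop, assuming a hypothetical `fetch_state` lookup — the real task-state client call is not visible in this console output:

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, log=print):
    """Poll task states until none is STARTED, mimicking the loop in the log.

    fetch_state(task_id) -> "STARTED" or "SUCCESS" is a hypothetical
    stand-in for the real task-state lookup, which the log does not show.
    """
    pending = list(task_ids)
    states = {}
    while pending:
        for task_id in list(pending):
            state = fetch_state(task_id)
            states[task_id] = state
            log(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                # Finished tasks drop out of the poll set, matching how
                # completed task IDs disappear from later cycles in the log.
                pending.remove(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states

# Example with a fake state source: each task flips to SUCCESS on its third poll.
calls = {}
def fake_state(task_id):
    calls[task_id] = calls.get(task_id, 0) + 1
    return "SUCCESS" if calls[task_id] >= 3 else "STARTED"

result = wait_for_tasks(["f3f22c85", "2d96f152"], fake_state,
                        interval=0.01, log=lambda msg: None)
```

Note that tasks finish independently: in the log, d56607ec… reaches SUCCESS while the other four IDs keep polling, exactly as the per-task removal above allows.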
2025-06-02 20:15:24.842587 | orchestrator | 2025-06-02 20:15:24.842869 | orchestrator | None 2025-06-02 20:15:24.842989 | orchestrator | 2025-06-02 20:15:24 | INFO  | Task f3f22c85-e7c0-44af-9c87-1dd907187048 is in state SUCCESS 2025-06-02 20:15:24.843754 | orchestrator | 2025-06-02 20:15:24.843793 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:15:24.843806 | orchestrator | 2025-06-02 20:15:24.843816 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:15:24.843827 | orchestrator | Monday 02 June 2025 20:12:20 +0000 (0:00:00.265) 0:00:00.265 *********** 2025-06-02 20:15:24.843838 | orchestrator | ok: [testbed-manager] 2025-06-02 20:15:24.843850 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:15:24.843860 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:15:24.843870 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:15:24.843881 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:15:24.843919 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:15:24.843932 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:15:24.843941 | orchestrator | 2025-06-02 20:15:24.843952 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:15:24.844505 | orchestrator | Monday 02 June 2025 20:12:21 +0000 (0:00:00.740) 0:00:01.005 *********** 2025-06-02 20:15:24.844524 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-02 20:15:24.844534 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-02 20:15:24.844546 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-02 20:15:24.844557 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-02 20:15:24.844568 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-02 20:15:24.844577 | orchestrator | ok: [testbed-node-4] => 
(item=enable_prometheus_True) 2025-06-02 20:15:24.844584 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-02 20:15:24.844590 | orchestrator | 2025-06-02 20:15:24.844597 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-02 20:15:24.844604 | orchestrator | 2025-06-02 20:15:24.844610 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 20:15:24.844616 | orchestrator | Monday 02 June 2025 20:12:22 +0000 (0:00:00.671) 0:00:01.677 *********** 2025-06-02 20:15:24.844624 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:15:24.844632 | orchestrator | 2025-06-02 20:15:24.844639 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-02 20:15:24.844645 | orchestrator | Monday 02 June 2025 20:12:23 +0000 (0:00:01.366) 0:00:03.044 *********** 2025-06-02 20:15:24.844656 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:15:24.844668 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.844688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.844695 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.844728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.844736 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.844744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.844750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.844757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.844771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:15:24.844784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.844866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.844875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.844882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.844888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.844895 
| orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.844901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.844913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.844925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.844935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.844942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.844949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.844956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.844963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.844969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.844980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.844991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.845003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846331 | orchestrator | 2025-06-02 20:15:24.846340 | orchestrator | TASK [prometheus : include_tasks] 
********************************************** 2025-06-02 20:15:24.846348 | orchestrator | Monday 02 June 2025 20:12:27 +0000 (0:00:03.412) 0:00:06.456 *********** 2025-06-02 20:15:24.846357 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:15:24.846365 | orchestrator | 2025-06-02 20:15:24.846371 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-02 20:15:24.846379 | orchestrator | Monday 02 June 2025 20:12:28 +0000 (0:00:01.606) 0:00:08.063 *********** 2025-06-02 20:15:24.846387 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:15:24.846396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.846421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.846442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.846499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.846508 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.846514 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.846523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.846538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846566 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846599 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846615 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:15:24.846636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846761 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}}) 2025-06-02 20:15:24.846782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.846815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.846859 | orchestrator | 2025-06-02 20:15:24.846865 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-02 20:15:24.846873 | orchestrator | Monday 02 June 2025 20:12:34 +0000 (0:00:06.161) 0:00:14.224 *********** 2025-06-02 20:15:24.846879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.846886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.846899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.846906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.846917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.846924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.846954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.846962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.846969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.846976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.846994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847037 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:15:24.847044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847051 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847069 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847080 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:15:24.847089 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847096 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:24.847105 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:24.847111 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:24.847117 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:15:24.847146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847174 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:24.847181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847201 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:24.847211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847254 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:24.847261 | orchestrator | 2025-06-02 20:15:24.847268 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-02 20:15:24.847275 | orchestrator | Monday 02 June 2025 20:12:36 +0000 (0:00:01.390) 0:00:15.615 *********** 2025-06-02 20:15:24.847283 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:15:24.847295 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847303 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847314 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:15:24.847323 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847420 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:15:24.847447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847496 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:24.847504 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:24.847511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:24.847552 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:24.847582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847609 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:24.847617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847639 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:24.847650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:24.847657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 
20:15:24.847685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:24.847698 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:24.847706 | orchestrator | 2025-06-02 20:15:24.847713 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-02 20:15:24.847720 | orchestrator | Monday 02 June 2025 20:12:37 +0000 (0:00:01.718) 0:00:17.333 *********** 2025-06-02 20:15:24.847727 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:15:24.847734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.847742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.847749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.847763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.847771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.847803 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.847811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.847819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-06-02 20:15:24.847825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.847832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.847838 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.847848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.847854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.847886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.847894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.847902 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.847910 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:15:24.847918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.847925 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.847941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.847969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.847977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.847984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.847991 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.847998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.848005 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.848015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.848027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.848034 | orchestrator | 2025-06-02 20:15:24.848041 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-02 20:15:24.848049 | orchestrator | Monday 02 June 2025 20:12:43 +0000 (0:00:05.696) 0:00:23.030 *********** 2025-06-02 20:15:24.848056 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 20:15:24.848062 | orchestrator | 2025-06-02 20:15:24.848069 | 
orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-02 20:15:24.848080 | orchestrator | Monday 02 June 2025 20:12:44 +0000 (0:00:00.711) 0:00:23.742 *********** 2025-06-02 20:15:24.848088 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319564, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.177885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848095 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319564, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.177885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848102 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319564, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.177885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848110 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319564, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.177885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848116 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319564, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.177885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.848132 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319555, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848146 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319555, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848153 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319555, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848160 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319564, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.177885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848167 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319555, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848174 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319564, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.177885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848180 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319536, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.170885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848193 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319536, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 
1748892814.170885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848230 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319536, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.170885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848237 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319537, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.171885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848244 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319536, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.170885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848251 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319555, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848257 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319555, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848273 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319537, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.171885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848283 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319537, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.171885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848291 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319555, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.848317 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1319551, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848325 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1319551, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848332 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319536, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.170885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848338 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319542, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848354 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319537, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.171885, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848368 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1319551, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848375 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319536, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.170885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848399 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319542, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848407 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319537, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.171885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848414 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319537, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.171885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848420 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1319550, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.174885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.848433 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319542, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:24.848444 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2025-06-02 20:15:24.848451 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rules)
2025-06-02 20:15:24.848541 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2025-06-02 20:15:24.848553 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2025-06-02 20:15:24.848561 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2025-06-02 20:15:24.848575 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2025-06-02 20:15:24.848583 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2025-06-02 20:15:24.848594 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2025-06-02 20:15:24.848602 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-02 20:15:24.848630 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2025-06-02 20:15:24.848639 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2025-06-02 20:15:24.848647 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2025-06-02 20:15:24.848661 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2025-06-02 20:15:24.848668 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2025-06-02 20:15:24.848680 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2025-06-02 20:15:24.848688 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2025-06-02 20:15:24.848717 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-06-02 20:15:24.848726 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-02 20:15:24.848733 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2025-06-02 20:15:24.848746 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-02 20:15:24.848753 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2025-06-02 20:15:24.848764 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2025-06-02 20:15:24.848772 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2025-06-02 20:15:24.848799 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2025-06-02 20:15:24.848808 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2025-06-02 20:15:24.848816 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2025-06-02 20:15:24.848829 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-02 20:15:24.848836 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-02 20:15:24.848846 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-02 20:15:24.848853 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-06-02 20:15:24.848880 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-02 20:15:24.848888 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-06-02 20:15:24.848901 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2025-06-02 20:15:24.848908 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2025-06-02 20:15:24.848914 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-02 20:15:24.848924 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-06-02 20:15:24.848932 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-06-02 20:15:24.848961 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-02 20:15:24.848971 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-06-02 20:15:24.848983 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-02 20:15:24.848991 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-02 20:15:24.848999 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-06-02 20:15:24.849006 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-02 20:15:24.849017 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2025-06-02 20:15:24.849030 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2025-06-02 20:15:24.849038 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-02 20:15:24.849054 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-02 20:15:24.849061 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-02 20:15:24.849068 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-02 20:15:24.849075 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2025-06-02 20:15:24.849085 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-02 20:15:24.849098 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-02 20:15:24.849106 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-02 20:15:24.849118 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-02 20:15:24.849125 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-02 20:15:24.849132 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-02 20:15:24.849139 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2025-06-02 20:15:24.849149 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-02 20:15:24.849160 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2025-06-02 20:15:24.849172 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2025-06-02 20:15:24.849180 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319535, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.170885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True,
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849186 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319552, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849193 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319552, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849200 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319552, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849210 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319545, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849222 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319574, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.180885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849235 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319574, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.180885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849242 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319552, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849249 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319574, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.180885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849257 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319545, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849264 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319545, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 
'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849275 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319574, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.180885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849287 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1319557, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1768851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849299 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319566, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1788852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849307 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:24.849314 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319574, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.180885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849321 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319566, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1788852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849329 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319545, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 
20:15:24.849336 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:24.849342 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319566, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1788852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849349 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:24.849359 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319545, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849376 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319566, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1788852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 
20:15:24.849382 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:24.849388 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319545, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849395 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319566, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1788852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:24.849402 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:24.849409 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319566, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1788852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 
20:15:24.849415 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:24.849421 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1319562, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.177885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1319575, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.180885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849442 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319558, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1768851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849453 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319540, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.172885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849484 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319549, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.174885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849491 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319535, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.170885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319552, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1758852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849504 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319574, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.180885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849514 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319545, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.173885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:24.849526 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319566, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 
1748892814.1788852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:24.849532 | orchestrator |
2025-06-02 20:15:24.849539 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-02 20:15:24.849546 | orchestrator | Monday 02 June 2025 20:13:05 +0000 (0:00:21.231) 0:00:44.973 ***********
2025-06-02 20:15:24.849552 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 20:15:24.849558 | orchestrator |
2025-06-02 20:15:24.849565 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-02 20:15:24.849575 | orchestrator | Monday 02 June 2025 20:13:06 +0000 (0:00:00.859) 0:00:45.832 ***********
2025-06-02 20:15:24.849582 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:24.849589 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849597 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:24.849603 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849610 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-06-02 20:15:24.849618 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 20:15:24.849624 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:24.849630 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849637 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:24.849644 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849650 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-06-02 20:15:24.849658 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:24.849665 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849672 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:24.849679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849686 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-06-02 20:15:24.849693 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:24.849700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849707 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:24.849715 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849721 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-06-02 20:15:24.849728 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:24.849736 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849743 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:24.849751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849758 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-06-02 20:15:24.849765 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:24.849772 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849779 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:24.849796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849803 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-06-02 20:15:24.849810 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:24.849817 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849824 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:24.849830 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:24.849836 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-06-02 20:15:24.849843 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:15:24.849849 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 20:15:24.849854 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 20:15:24.849860 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 20:15:24.849865 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 20:15:24.849871 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 20:15:24.849877 | orchestrator |
2025-06-02 20:15:24.849883 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-06-02 20:15:24.849890 | orchestrator | Monday 02 June 2025 20:13:08 +0000 (0:00:02.377) 0:00:48.210 ***********
2025-06-02 20:15:24.849896 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:24.849903 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:24.849910 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:24.849920 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:24.849927 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:24.849934 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:24.849941 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:24.849947 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:24.849953 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:24.849960 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:24.849967 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:24.849974 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:24.849980 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:24.849988 | orchestrator |
2025-06-02 20:15:24.849995 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-06-02 20:15:24.850001 | orchestrator | Monday 02 June 2025 20:13:23 +0000 (0:00:14.618) 0:01:02.828 ***********
2025-06-02 20:15:24.850007 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:24.850013 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:24.850150 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:24.850161 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:24.850168 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:24.850175 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:24.850181 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:24.850188 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:24.850195 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:24.850202 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:24.850209 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:24.850216 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:24.850234 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:24.850243 | orchestrator |
2025-06-02 20:15:24.850250 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-06-02 20:15:24.850257 | orchestrator | Monday 02 June 2025 20:13:26 +0000 (0:00:03.132) 0:01:05.961 ***********
2025-06-02 20:15:24.850265 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:24.850273 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:24.850280 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:24.850288 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:24.850296 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:24.850305 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:24.850312 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:24.850319 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:24.850326 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:24.850333 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:24.850340 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:24.850348 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:24.850355 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:24.850362 | orchestrator |
2025-06-02 20:15:24.850369 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-06-02 20:15:24.850377 | orchestrator | Monday 02 June 2025 20:13:28 +0000 (0:00:02.197) 0:01:08.158 ***********
2025-06-02 20:15:24.850385 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 20:15:24.850392 | orchestrator |
2025-06-02 20:15:24.850399 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-06-02 20:15:24.850407 | orchestrator | Monday 02 June 2025 20:13:29 +0000 (0:00:00.604) 0:01:08.763 ***********
2025-06-02 20:15:24.850414 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:15:24.850421 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:24.850428 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:24.850435 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:24.850443 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:24.850451 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:24.850479 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:24.850487 | orchestrator |
2025-06-02 20:15:24.850495 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-06-02 20:15:24.850502 | orchestrator | Monday 02 June 2025 20:13:30 +0000 (0:00:00.756) 0:01:09.519 ***********
2025-06-02 20:15:24.850509 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:15:24.850517 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:24.850524 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:24.850532 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:15:24.850539 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:24.850546 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:15:24.850552 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:24.850560 | orchestrator |
2025-06-02 20:15:24.850567 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-06-02 20:15:24.850575 | orchestrator | Monday 02 June 2025 20:13:33 +0000 (0:00:02.910) 0:01:12.430 ***********
2025-06-02 20:15:24.850583 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:24.850597 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:24.850604 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:15:24.850612 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:24.850619 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:24.850627 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:24.850634 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:24.850642 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:24.850649 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:24.850662 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:24.850671 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:24.850678 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:24.850684 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:24.850691 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:24.850698 | orchestrator |
2025-06-02 20:15:24.850705 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-02
20:15:24.850712 | orchestrator | Monday 02 June 2025 20:13:35 +0000 (0:00:02.118) 0:01:14.548 *********** 2025-06-02 20:15:24.850720 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 20:15:24.850728 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:24.850735 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-02 20:15:24.850742 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 20:15:24.850835 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:24.850861 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 20:15:24.850868 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:24.850875 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 20:15:24.850881 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:24.850887 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 20:15:24.850894 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:24.850901 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 20:15:24.850907 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:24.850914 | orchestrator | 2025-06-02 20:15:24.850922 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-02 20:15:24.850929 | orchestrator | Monday 02 June 2025 20:13:37 +0000 (0:00:02.031) 0:01:16.580 *********** 2025-06-02 20:15:24.850936 | orchestrator | [WARNING]: Skipped 2025-06-02 20:15:24.850944 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-02 20:15:24.850950 | orchestrator | due to this access issue: 2025-06-02 20:15:24.850957 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-02 20:15:24.850964 | orchestrator | not a directory 2025-06-02 20:15:24.850972 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 20:15:24.850978 | orchestrator | 2025-06-02 20:15:24.850985 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-02 20:15:24.850991 | orchestrator | Monday 02 June 2025 20:13:38 +0000 (0:00:01.242) 0:01:17.822 *********** 2025-06-02 20:15:24.850997 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:15:24.851004 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:24.851018 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:24.851025 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:24.851032 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:24.851038 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:24.851045 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:24.851051 | orchestrator | 2025-06-02 20:15:24.851058 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-02 20:15:24.851065 | orchestrator | Monday 02 June 2025 20:13:39 +0000 (0:00:01.020) 0:01:18.842 *********** 2025-06-02 20:15:24.851071 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:15:24.851078 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:24.851085 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:24.851092 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:24.851099 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:24.851105 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:24.851112 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
20:15:24.851118 | orchestrator | 2025-06-02 20:15:24.851125 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-02 20:15:24.851132 | orchestrator | Monday 02 June 2025 20:13:40 +0000 (0:00:00.689) 0:01:19.532 *********** 2025-06-02 20:15:24.851145 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:15:24.851163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.851171 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.851178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.851185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.851198 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.851205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.851216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.851224 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.851236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-02 20:15:24.851243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.851250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:24.851257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.851269 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:15:24.851281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.851289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.851301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.851309 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.851316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.851329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2025-06-02 20:15:24.851335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.851342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.851352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.851359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.851370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.851378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:24.851385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.851398 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.851405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:24.851412 | orchestrator | 2025-06-02 20:15:24.851418 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-02 20:15:24.851425 | orchestrator | Monday 02 June 2025 20:13:44 +0000 (0:00:04.608) 0:01:24.140 *********** 2025-06-02 20:15:24.851431 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 20:15:24.851438 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:15:24.851445 | orchestrator | 2025-06-02 20:15:24.851451 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 20:15:24.851457 | orchestrator | Monday 02 June 2025 20:13:46 +0000 (0:00:01.414) 0:01:25.555 *********** 2025-06-02 20:15:24.851491 | orchestrator | 2025-06-02 20:15:24.851498 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 20:15:24.851506 
| orchestrator | Monday 02 June 2025 20:13:46 +0000 (0:00:00.272) 0:01:25.828 *********** 2025-06-02 20:15:24.851513 | orchestrator | 2025-06-02 20:15:24.851519 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 20:15:24.851526 | orchestrator | Monday 02 June 2025 20:13:46 +0000 (0:00:00.068) 0:01:25.896 *********** 2025-06-02 20:15:24.851534 | orchestrator | 2025-06-02 20:15:24.851541 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 20:15:24.851553 | orchestrator | Monday 02 June 2025 20:13:46 +0000 (0:00:00.076) 0:01:25.973 *********** 2025-06-02 20:15:24.851560 | orchestrator | 2025-06-02 20:15:24.851567 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 20:15:24.851575 | orchestrator | Monday 02 June 2025 20:13:46 +0000 (0:00:00.066) 0:01:26.039 *********** 2025-06-02 20:15:24.851582 | orchestrator | 2025-06-02 20:15:24.851590 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 20:15:24.851597 | orchestrator | Monday 02 June 2025 20:13:46 +0000 (0:00:00.061) 0:01:26.101 *********** 2025-06-02 20:15:24.851604 | orchestrator | 2025-06-02 20:15:24.851612 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 20:15:24.851619 | orchestrator | Monday 02 June 2025 20:13:46 +0000 (0:00:00.087) 0:01:26.189 *********** 2025-06-02 20:15:24.851626 | orchestrator | 2025-06-02 20:15:24.851634 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-02 20:15:24.851641 | orchestrator | Monday 02 June 2025 20:13:46 +0000 (0:00:00.101) 0:01:26.290 *********** 2025-06-02 20:15:24.851648 | orchestrator | changed: [testbed-manager] 2025-06-02 20:15:24.851656 | orchestrator | 2025-06-02 20:15:24.851662 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-node-exporter container] ****** 2025-06-02 20:15:24.851669 | orchestrator | Monday 02 June 2025 20:14:01 +0000 (0:00:15.061) 0:01:41.352 *********** 2025-06-02 20:15:24.851690 | orchestrator | changed: [testbed-manager] 2025-06-02 20:15:24.851698 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:24.851705 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:15:24.851713 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:15:24.851720 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:15:24.851727 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:15:24.851733 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:15:24.851741 | orchestrator | 2025-06-02 20:15:24.851748 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-02 20:15:24.851755 | orchestrator | Monday 02 June 2025 20:14:16 +0000 (0:00:14.974) 0:01:56.326 *********** 2025-06-02 20:15:24.851762 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:15:24.851769 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:15:24.851777 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:24.851784 | orchestrator | 2025-06-02 20:15:24.851791 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-02 20:15:24.851799 | orchestrator | Monday 02 June 2025 20:14:27 +0000 (0:00:10.959) 0:02:07.286 *********** 2025-06-02 20:15:24.851806 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:24.851813 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:15:24.851820 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:15:24.851828 | orchestrator | 2025-06-02 20:15:24.851835 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-02 20:15:24.851841 | orchestrator | Monday 02 June 2025 20:14:33 +0000 (0:00:05.442) 0:02:12.728 *********** 2025-06-02 20:15:24.851846 | orchestrator | changed: 
[testbed-node-0] 2025-06-02 20:15:24.851853 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:15:24.851858 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:15:24.851865 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:15:24.851871 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:15:24.851878 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:15:24.851885 | orchestrator | changed: [testbed-manager] 2025-06-02 20:15:24.851892 | orchestrator | 2025-06-02 20:15:24.851900 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-02 20:15:24.851907 | orchestrator | Monday 02 June 2025 20:14:47 +0000 (0:00:13.982) 0:02:26.711 *********** 2025-06-02 20:15:24.851914 | orchestrator | changed: [testbed-manager] 2025-06-02 20:15:24.851921 | orchestrator | 2025-06-02 20:15:24.851928 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-02 20:15:24.851936 | orchestrator | Monday 02 June 2025 20:14:54 +0000 (0:00:07.032) 0:02:33.743 *********** 2025-06-02 20:15:24.851944 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:15:24.851951 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:15:24.851958 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:24.851965 | orchestrator | 2025-06-02 20:15:24.851972 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-02 20:15:24.851979 | orchestrator | Monday 02 June 2025 20:15:05 +0000 (0:00:11.456) 0:02:45.200 *********** 2025-06-02 20:15:24.851986 | orchestrator | changed: [testbed-manager] 2025-06-02 20:15:24.851993 | orchestrator | 2025-06-02 20:15:24.852000 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-02 20:15:24.852006 | orchestrator | Monday 02 June 2025 20:15:11 +0000 (0:00:05.250) 0:02:50.450 *********** 2025-06-02 20:15:24.852013 | orchestrator | changed: 
[testbed-node-5] 2025-06-02 20:15:24.852020 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:15:24.852027 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:15:24.852034 | orchestrator | 2025-06-02 20:15:24.852042 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:15:24.852049 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 20:15:24.852057 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 20:15:24.852071 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 20:15:24.852078 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 20:15:24.852085 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 20:15:24.852097 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 20:15:24.852105 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 20:15:24.852112 | orchestrator | 2025-06-02 20:15:24.852119 | orchestrator | 2025-06-02 20:15:24.852127 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:15:24.852134 | orchestrator | Monday 02 June 2025 20:15:22 +0000 (0:00:11.225) 0:03:01.675 *********** 2025-06-02 20:15:24.852141 | orchestrator | =============================================================================== 2025-06-02 20:15:24.852148 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.23s 2025-06-02 20:15:24.852155 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.06s 2025-06-02 20:15:24.852163 | 
orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.97s 2025-06-02 20:15:24.852169 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.62s 2025-06-02 20:15:24.852176 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.98s 2025-06-02 20:15:24.852187 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.46s 2025-06-02 20:15:24.852195 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.23s 2025-06-02 20:15:24.852202 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.96s 2025-06-02 20:15:24.852209 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.03s 2025-06-02 20:15:24.852216 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.16s 2025-06-02 20:15:24.852223 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.70s 2025-06-02 20:15:24.852230 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.44s 2025-06-02 20:15:24.852236 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.25s 2025-06-02 20:15:24.852242 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.61s 2025-06-02 20:15:24.852249 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.41s 2025-06-02 20:15:24.852256 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.13s 2025-06-02 20:15:24.852263 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.91s 2025-06-02 20:15:24.852269 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.38s 2025-06-02 20:15:24.852276 | orchestrator | 
prometheus : Copying over prometheus alertmanager config file ----------- 2.20s 2025-06-02 20:15:24.852282 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.12s 2025-06-02 20:15:24.852288 | orchestrator | 2025-06-02 20:15:24 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:15:24.852295 | orchestrator | 2025-06-02 20:15:24 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:24.852301 | orchestrator | 2025-06-02 20:15:24 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:24.852319 | orchestrator | 2025-06-02 20:15:24 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:24.852326 | orchestrator | 2025-06-02 20:15:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:27.890893 | orchestrator | 2025-06-02 20:15:27 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:15:27.892325 | orchestrator | 2025-06-02 20:15:27 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:27.893591 | orchestrator | 2025-06-02 20:15:27 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:27.894698 | orchestrator | 2025-06-02 20:15:27 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:27.894806 | orchestrator | 2025-06-02 20:15:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:30.958556 | orchestrator | 2025-06-02 20:15:30 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:15:30.959121 | orchestrator | 2025-06-02 20:15:30 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:30.959856 | orchestrator | 2025-06-02 20:15:30 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:30.960794 | orchestrator | 2025-06-02 20:15:30 | INFO  | 
Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:30.961098 | orchestrator | 2025-06-02 20:15:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:34.007312 | orchestrator | 2025-06-02 20:15:34 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:15:34.013431 | orchestrator | 2025-06-02 20:15:34 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:34.013615 | orchestrator | 2025-06-02 20:15:34 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:34.014706 | orchestrator | 2025-06-02 20:15:34 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state STARTED 2025-06-02 20:15:34.014757 | orchestrator | 2025-06-02 20:15:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:37.054099 | orchestrator | 2025-06-02 20:15:37 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:15:37.054469 | orchestrator | 2025-06-02 20:15:37 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:37.055378 | orchestrator | 2025-06-02 20:15:37 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:37.056560 | orchestrator | 2025-06-02 20:15:37 | INFO  | Task 2d96f152-0242-4188-a1c5-f028a73245b4 is in state SUCCESS 2025-06-02 20:15:37.058256 | orchestrator | 2025-06-02 20:15:37.058362 | orchestrator | 2025-06-02 20:15:37.058392 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:15:37.058414 | orchestrator | 2025-06-02 20:15:37.058432 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:15:37.058483 | orchestrator | Monday 02 June 2025 20:12:27 +0000 (0:00:00.312) 0:00:00.312 *********** 2025-06-02 20:15:37.058503 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:15:37.058524 | orchestrator | ok: 
[testbed-node-1] 2025-06-02 20:15:37.058543 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:15:37.058561 | orchestrator | 2025-06-02 20:15:37.058580 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:15:37.058599 | orchestrator | Monday 02 June 2025 20:12:28 +0000 (0:00:00.377) 0:00:00.689 *********** 2025-06-02 20:15:37.058620 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-02 20:15:37.058640 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-02 20:15:37.058661 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-02 20:15:37.058720 | orchestrator | 2025-06-02 20:15:37.058741 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-02 20:15:37.058760 | orchestrator | 2025-06-02 20:15:37.058779 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 20:15:37.058796 | orchestrator | Monday 02 June 2025 20:12:28 +0000 (0:00:00.474) 0:00:01.164 *********** 2025-06-02 20:15:37.058808 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:15:37.058820 | orchestrator | 2025-06-02 20:15:37.058832 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-02 20:15:37.058843 | orchestrator | Monday 02 June 2025 20:12:29 +0000 (0:00:00.833) 0:00:01.997 *********** 2025-06-02 20:15:37.058854 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-02 20:15:37.058865 | orchestrator | 2025-06-02 20:15:37.058876 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-02 20:15:37.058888 | orchestrator | Monday 02 June 2025 20:12:41 +0000 (0:00:12.482) 0:00:14.480 *********** 2025-06-02 20:15:37.058905 | orchestrator | changed: [testbed-node-0] => 
(item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-02 20:15:37.058924 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-02 20:15:37.058942 | orchestrator | 2025-06-02 20:15:37.058959 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-02 20:15:37.058977 | orchestrator | Monday 02 June 2025 20:12:49 +0000 (0:00:07.651) 0:00:22.131 *********** 2025-06-02 20:15:37.058997 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-02 20:15:37.059015 | orchestrator | 2025-06-02 20:15:37.059033 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-02 20:15:37.059051 | orchestrator | Monday 02 June 2025 20:12:53 +0000 (0:00:03.517) 0:00:25.649 *********** 2025-06-02 20:15:37.059071 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:15:37.059090 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-02 20:15:37.059110 | orchestrator | 2025-06-02 20:15:37.059129 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-02 20:15:37.059147 | orchestrator | Monday 02 June 2025 20:12:57 +0000 (0:00:04.077) 0:00:29.726 *********** 2025-06-02 20:15:37.059163 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:15:37.059175 | orchestrator | 2025-06-02 20:15:37.059187 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-02 20:15:37.059205 | orchestrator | Monday 02 June 2025 20:13:00 +0000 (0:00:03.222) 0:00:32.948 *********** 2025-06-02 20:15:37.059224 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-02 20:15:37.059241 | orchestrator | 2025-06-02 20:15:37.059259 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 
2025-06-02 20:15:37.059278 | orchestrator | Monday 02 June 2025 20:13:04 +0000 (0:00:04.661) 0:00:37.610 *********** 2025-06-02 20:15:37.059353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.059391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.059411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.059430 | orchestrator | 2025-06-02 20:15:37.059442 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 20:15:37.059483 | orchestrator | Monday 02 June 2025 20:13:09 +0000 (0:00:04.832) 0:00:42.442 *********** 2025-06-02 20:15:37.059495 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:15:37.059507 | orchestrator | 2025-06-02 20:15:37.059528 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-02 20:15:37.059539 | orchestrator | Monday 
02 June 2025 20:13:10 +0000 (0:00:00.913) 0:00:43.355 *********** 2025-06-02 20:15:37.059551 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:37.059562 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:15:37.059573 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:15:37.059584 | orchestrator | 2025-06-02 20:15:37.059595 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-02 20:15:37.059606 | orchestrator | Monday 02 June 2025 20:13:14 +0000 (0:00:03.885) 0:00:47.241 *********** 2025-06-02 20:15:37.059621 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:15:37.059641 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:15:37.059661 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:15:37.059678 | orchestrator | 2025-06-02 20:15:37.059696 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-02 20:15:37.059714 | orchestrator | Monday 02 June 2025 20:13:16 +0000 (0:00:01.448) 0:00:48.689 *********** 2025-06-02 20:15:37.059732 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:15:37.059751 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:15:37.059769 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:15:37.059788 | orchestrator | 2025-06-02 20:15:37.059806 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-02 20:15:37.059823 | orchestrator | Monday 02 June 2025 20:13:17 +0000 (0:00:01.154) 
0:00:49.844 *********** 2025-06-02 20:15:37.059842 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:15:37.059861 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:15:37.059880 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:15:37.059898 | orchestrator | 2025-06-02 20:15:37.059916 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-02 20:15:37.059936 | orchestrator | Monday 02 June 2025 20:13:18 +0000 (0:00:00.847) 0:00:50.691 *********** 2025-06-02 20:15:37.059955 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.059972 | orchestrator | 2025-06-02 20:15:37.059989 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-02 20:15:37.060000 | orchestrator | Monday 02 June 2025 20:13:18 +0000 (0:00:00.106) 0:00:50.798 *********** 2025-06-02 20:15:37.060011 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.060022 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.060033 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.060044 | orchestrator | 2025-06-02 20:15:37.060055 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 20:15:37.060065 | orchestrator | Monday 02 June 2025 20:13:18 +0000 (0:00:00.261) 0:00:51.060 *********** 2025-06-02 20:15:37.060076 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:15:37.060088 | orchestrator | 2025-06-02 20:15:37.060099 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-02 20:15:37.060121 | orchestrator | Monday 02 June 2025 20:13:18 +0000 (0:00:00.479) 0:00:51.540 *********** 2025-06-02 20:15:37.060153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.060170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.060198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.060229 | orchestrator | 2025-06-02 20:15:37.060248 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-02 20:15:37.060268 | orchestrator | Monday 02 June 2025 20:13:22 +0000 (0:00:03.599) 0:00:55.139 *********** 2025-06-02 20:15:37.060292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:15:37.060305 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.060317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:15:37.060336 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.060362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:15:37.060375 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.060387 | orchestrator | 2025-06-02 20:15:37.060398 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-02 20:15:37.060409 | orchestrator | Monday 02 June 2025 20:13:25 +0000 (0:00:03.162) 0:00:58.302 *********** 2025-06-02 20:15:37.060421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:15:37.060439 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.060598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:15:37.060619 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.060630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:15:37.060652 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.060663 | orchestrator | 2025-06-02 20:15:37.060674 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-02 20:15:37.060685 | orchestrator | Monday 02 June 2025 20:13:29 +0000 (0:00:03.887) 0:01:02.190 *********** 2025-06-02 20:15:37.060696 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.060707 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.060718 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.060728 | orchestrator | 2025-06-02 20:15:37.060739 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-02 20:15:37.060750 | orchestrator | Monday 02 June 2025 20:13:35 +0000 (0:00:05.538) 0:01:07.728 *********** 2025-06-02 20:15:37.060782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.060796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.060821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.060833 | orchestrator | 2025-06-02 20:15:37.060845 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-02 20:15:37.060856 | orchestrator | Monday 02 June 2025 20:13:40 +0000 (0:00:05.076) 0:01:12.805 *********** 2025-06-02 20:15:37.060867 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:15:37.060878 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:37.060889 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:15:37.060899 | orchestrator | 2025-06-02 20:15:37.060910 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-02 20:15:37.060921 | orchestrator | Monday 02 June 2025 20:13:47 +0000 (0:00:07.332) 0:01:20.137 *********** 2025-06-02 20:15:37.060932 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.060943 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.060955 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.060965 | orchestrator | 2025-06-02 20:15:37.060977 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-02 20:15:37.060994 | orchestrator | Monday 02 June 2025 20:13:52 +0000 (0:00:04.515) 0:01:24.653 *********** 2025-06-02 20:15:37.061005 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.061016 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
20:15:37.061028 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.061038 | orchestrator | 2025-06-02 20:15:37.061049 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-02 20:15:37.061060 | orchestrator | Monday 02 June 2025 20:13:57 +0000 (0:00:05.739) 0:01:30.393 *********** 2025-06-02 20:15:37.061071 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.061082 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.061093 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.061104 | orchestrator | 2025-06-02 20:15:37.061115 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-02 20:15:37.061125 | orchestrator | Monday 02 June 2025 20:14:03 +0000 (0:00:06.130) 0:01:36.523 *********** 2025-06-02 20:15:37.061143 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.061154 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.061165 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.061176 | orchestrator | 2025-06-02 20:15:37.061186 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-02 20:15:37.061197 | orchestrator | Monday 02 June 2025 20:14:12 +0000 (0:00:08.910) 0:01:45.434 *********** 2025-06-02 20:15:37.061208 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.061219 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.061230 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.061241 | orchestrator | 2025-06-02 20:15:37.061252 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-02 20:15:37.061263 | orchestrator | Monday 02 June 2025 20:14:13 +0000 (0:00:00.456) 0:01:45.890 *********** 2025-06-02 20:15:37.061274 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 
20:15:37.061285 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.061296 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 20:15:37.061307 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.061318 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 20:15:37.061329 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.061340 | orchestrator | 2025-06-02 20:15:37.061351 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-02 20:15:37.061362 | orchestrator | Monday 02 June 2025 20:14:19 +0000 (0:00:05.923) 0:01:51.814 *********** 2025-06-02 20:15:37.061379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.061402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:15:37.061422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-06-02 20:15:37.061439 | orchestrator | 2025-06-02 20:15:37.061483 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 20:15:37.061501 | orchestrator | Monday 02 June 2025 20:14:23 +0000 (0:00:04.230) 0:01:56.044 *********** 2025-06-02 20:15:37.061518 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:37.061536 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:37.061554 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:37.061573 | orchestrator | 2025-06-02 20:15:37.061593 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-02 20:15:37.061612 | orchestrator | Monday 02 June 2025 20:14:23 +0000 (0:00:00.274) 0:01:56.319 *********** 2025-06-02 20:15:37.061631 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:37.061650 | orchestrator | 2025-06-02 20:15:37.061669 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-02 20:15:37.061696 | orchestrator | Monday 02 June 2025 20:14:26 +0000 (0:00:02.340) 0:01:58.659 *********** 2025-06-02 20:15:37.061713 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:37.061724 | orchestrator | 2025-06-02 20:15:37.061735 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-02 20:15:37.061756 | orchestrator | Monday 02 June 2025 20:14:28 +0000 (0:00:02.516) 0:02:01.176 *********** 2025-06-02 20:15:37.061768 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:37.061779 | orchestrator | 2025-06-02 20:15:37.061789 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-02 20:15:37.061801 | orchestrator | Monday 02 June 2025 20:14:30 +0000 (0:00:02.236) 0:02:03.412 *********** 2025-06-02 20:15:37.061811 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:37.061822 | orchestrator | 2025-06-02 20:15:37.061833 | 
orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-02 20:15:37.061845 | orchestrator | Monday 02 June 2025 20:15:00 +0000 (0:00:30.173) 0:02:33.586 *********** 2025-06-02 20:15:37.061856 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:37.061867 | orchestrator | 2025-06-02 20:15:37.061887 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 20:15:37.061899 | orchestrator | Monday 02 June 2025 20:15:03 +0000 (0:00:02.385) 0:02:35.972 *********** 2025-06-02 20:15:37.061909 | orchestrator | 2025-06-02 20:15:37.061920 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 20:15:37.061931 | orchestrator | Monday 02 June 2025 20:15:03 +0000 (0:00:00.057) 0:02:36.030 *********** 2025-06-02 20:15:37.061942 | orchestrator | 2025-06-02 20:15:37.061953 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 20:15:37.061964 | orchestrator | Monday 02 June 2025 20:15:03 +0000 (0:00:00.058) 0:02:36.088 *********** 2025-06-02 20:15:37.061975 | orchestrator | 2025-06-02 20:15:37.061986 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-02 20:15:37.061996 | orchestrator | Monday 02 June 2025 20:15:03 +0000 (0:00:00.067) 0:02:36.156 *********** 2025-06-02 20:15:37.062007 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:37.062074 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:15:37.062090 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:15:37.062101 | orchestrator | 2025-06-02 20:15:37.062112 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:15:37.062125 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 20:15:37.062138 | orchestrator | testbed-node-1 
: ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 20:15:37.062149 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 20:15:37.062160 | orchestrator | 2025-06-02 20:15:37.062171 | orchestrator | 2025-06-02 20:15:37.062182 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:15:37.062193 | orchestrator | Monday 02 June 2025 20:15:36 +0000 (0:00:32.704) 0:03:08.860 *********** 2025-06-02 20:15:37.062204 | orchestrator | =============================================================================== 2025-06-02 20:15:37.062215 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.70s 2025-06-02 20:15:37.062226 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.17s 2025-06-02 20:15:37.062237 | orchestrator | service-ks-register : glance | Creating services ----------------------- 12.48s 2025-06-02 20:15:37.062248 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 8.91s 2025-06-02 20:15:37.062259 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.65s 2025-06-02 20:15:37.062271 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.33s 2025-06-02 20:15:37.062282 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.13s 2025-06-02 20:15:37.062293 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.92s 2025-06-02 20:15:37.062303 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.74s 2025-06-02 20:15:37.062323 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.54s 2025-06-02 20:15:37.062342 | orchestrator | glance : Copying over config.json files for services 
-------------------- 5.08s 2025-06-02 20:15:37.062362 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.83s 2025-06-02 20:15:37.062381 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.66s 2025-06-02 20:15:37.062400 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.52s 2025-06-02 20:15:37.062418 | orchestrator | glance : Check glance containers ---------------------------------------- 4.23s 2025-06-02 20:15:37.062436 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.08s 2025-06-02 20:15:37.062480 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.89s 2025-06-02 20:15:37.062500 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.89s 2025-06-02 20:15:37.062518 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.60s 2025-06-02 20:15:37.062536 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.52s 2025-06-02 20:15:37.062555 | orchestrator | 2025-06-02 20:15:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:40.089623 | orchestrator | 2025-06-02 20:15:40 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:15:40.090952 | orchestrator | 2025-06-02 20:15:40 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:15:40.091915 | orchestrator | 2025-06-02 20:15:40 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:15:40.093491 | orchestrator | 2025-06-02 20:15:40 | INFO  | Task afb2f45f-835e-4242-808a-e11dfff28d56 is in state STARTED 2025-06-02 20:15:40.093634 | orchestrator | 2025-06-02 20:15:40 | INFO  | Wait 1 second(s) until the next check
d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:16:47.117242 | orchestrator | 2025-06-02 20:16:47 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:16:47.117633 | orchestrator | 2025-06-02 20:16:47 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:16:47.118474 | orchestrator | 2025-06-02 20:16:47 | INFO  | Task afb2f45f-835e-4242-808a-e11dfff28d56 is in state STARTED 2025-06-02 20:16:47.118551 | orchestrator | 2025-06-02 20:16:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:16:50.145479 | orchestrator | 2025-06-02 20:16:50 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:16:50.145853 | orchestrator | 2025-06-02 20:16:50 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:16:50.147093 | orchestrator | 2025-06-02 20:16:50 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:16:50.147743 | orchestrator | 2025-06-02 20:16:50 | INFO  | Task afb2f45f-835e-4242-808a-e11dfff28d56 is in state STARTED 2025-06-02 20:16:50.147781 | orchestrator | 2025-06-02 20:16:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:16:53.171699 | orchestrator | 2025-06-02 20:16:53 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:16:53.172997 | orchestrator | 2025-06-02 20:16:53 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:16:53.173089 | orchestrator | 2025-06-02 20:16:53 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:16:53.173104 | orchestrator | 2025-06-02 20:16:53 | INFO  | Task afb2f45f-835e-4242-808a-e11dfff28d56 is in state STARTED 2025-06-02 20:16:53.173113 | orchestrator | 2025-06-02 20:16:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:16:56.208705 | orchestrator | 2025-06-02 20:16:56 | INFO  | Task 
d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:16:56.208978 | orchestrator | 2025-06-02 20:16:56 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state STARTED 2025-06-02 20:16:56.209655 | orchestrator | 2025-06-02 20:16:56 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:16:56.210219 | orchestrator | 2025-06-02 20:16:56 | INFO  | Task afb2f45f-835e-4242-808a-e11dfff28d56 is in state STARTED 2025-06-02 20:16:56.210258 | orchestrator | 2025-06-02 20:16:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:16:59.234246 | orchestrator | 2025-06-02 20:16:59 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:16:59.237698 | orchestrator | 2025-06-02 20:16:59 | INFO  | Task c6b948d2-f282-49c7-90c2-79278fdc5084 is in state SUCCESS 2025-06-02 20:16:59.238761 | orchestrator | 2025-06-02 20:16:59.238909 | orchestrator | 2025-06-02 20:16:59.238955 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:16:59.238970 | orchestrator | 2025-06-02 20:16:59.238982 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:16:59.238994 | orchestrator | Monday 02 June 2025 20:12:59 +0000 (0:00:00.192) 0:00:00.192 *********** 2025-06-02 20:16:59.239005 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:16:59.239018 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:16:59.239029 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:16:59.239041 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:16:59.239052 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:16:59.239063 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:16:59.239074 | orchestrator | 2025-06-02 20:16:59.239086 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:16:59.239098 | orchestrator | Monday 02 June 2025 20:12:59 +0000 (0:00:00.490) 
0:00:00.682 *********** 2025-06-02 20:16:59.239110 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-02 20:16:59.239141 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-02 20:16:59.239154 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-02 20:16:59.239166 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-02 20:16:59.239177 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-02 20:16:59.239188 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-02 20:16:59.239199 | orchestrator | 2025-06-02 20:16:59.239210 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-02 20:16:59.239221 | orchestrator | 2025-06-02 20:16:59.239233 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 20:16:59.239244 | orchestrator | Monday 02 June 2025 20:13:00 +0000 (0:00:00.520) 0:00:01.203 *********** 2025-06-02 20:16:59.239269 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:16:59.239284 | orchestrator | 2025-06-02 20:16:59.239296 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-02 20:16:59.239307 | orchestrator | Monday 02 June 2025 20:13:01 +0000 (0:00:01.003) 0:00:02.206 *********** 2025-06-02 20:16:59.239319 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-02 20:16:59.239330 | orchestrator | 2025-06-02 20:16:59.239341 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-02 20:16:59.239353 | orchestrator | Monday 02 June 2025 20:13:04 +0000 (0:00:03.550) 0:00:05.756 *********** 2025-06-02 20:16:59.239365 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> 
https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-02 20:16:59.239379 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-02 20:16:59.239417 | orchestrator | 2025-06-02 20:16:59.239430 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-02 20:16:59.239444 | orchestrator | Monday 02 June 2025 20:13:11 +0000 (0:00:06.688) 0:00:12.444 *********** 2025-06-02 20:16:59.239458 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:16:59.239471 | orchestrator | 2025-06-02 20:16:59.239485 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-02 20:16:59.239497 | orchestrator | Monday 02 June 2025 20:13:15 +0000 (0:00:03.474) 0:00:15.918 *********** 2025-06-02 20:16:59.239508 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:16:59.239519 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-02 20:16:59.239531 | orchestrator | 2025-06-02 20:16:59.239543 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-02 20:16:59.239554 | orchestrator | Monday 02 June 2025 20:13:19 +0000 (0:00:04.059) 0:00:19.978 *********** 2025-06-02 20:16:59.239566 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:16:59.239588 | orchestrator | 2025-06-02 20:16:59.239600 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-02 20:16:59.239610 | orchestrator | Monday 02 June 2025 20:13:22 +0000 (0:00:03.243) 0:00:23.222 *********** 2025-06-02 20:16:59.239620 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-02 20:16:59.239632 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-02 20:16:59.239642 | orchestrator | 
2025-06-02 20:16:59.239652 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-02 20:16:59.239663 | orchestrator | Monday 02 June 2025 20:13:30 +0000 (0:00:08.326) 0:00:31.548 *********** 2025-06-02 20:16:59.239677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.239718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.239739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.239751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.239764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.239783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.239803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.239819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.239831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.239843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.239861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.239874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.239886 | orchestrator | 2025-06-02 20:16:59.239902 | orchestrator | TASK [cinder : include_tasks] ************************************************** 
2025-06-02 20:16:59.239913 | orchestrator | Monday 02 June 2025 20:13:33 +0000 (0:00:02.958) 0:00:34.507 *********** 2025-06-02 20:16:59.239924 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.239935 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:16:59.239945 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:16:59.239957 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:16:59.239968 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:16:59.239978 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:16:59.239989 | orchestrator | 2025-06-02 20:16:59.240001 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 20:16:59.240012 | orchestrator | Monday 02 June 2025 20:13:34 +0000 (0:00:00.655) 0:00:35.163 *********** 2025-06-02 20:16:59.240023 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.240033 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:16:59.240044 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:16:59.240055 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:16:59.240065 | orchestrator | 2025-06-02 20:16:59.240076 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-02 20:16:59.240087 | orchestrator | Monday 02 June 2025 20:13:35 +0000 (0:00:00.792) 0:00:35.955 *********** 2025-06-02 20:16:59.240097 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-02 20:16:59.240109 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-02 20:16:59.240119 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-02 20:16:59.240130 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-02 20:16:59.240141 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-02 20:16:59.240152 | orchestrator | 
changed: [testbed-node-4] => (item=cinder-backup) 2025-06-02 20:16:59.240163 | orchestrator | 2025-06-02 20:16:59.240178 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-02 20:16:59.240198 | orchestrator | Monday 02 June 2025 20:13:37 +0000 (0:00:02.326) 0:00:38.281 *********** 2025-06-02 20:16:59.240211 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:16:59.240225 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  
2025-06-02 20:16:59.240237 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:16:59.240258 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:16:59.240276 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:16:59.240296 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:16:59.240308 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:16:59.240320 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:16:59.240338 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:16:59.240356 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:16:59.240376 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:16:59.240412 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:16:59.240425 | orchestrator | 2025-06-02 20:16:59.240436 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-02 20:16:59.240446 | orchestrator | Monday 02 June 2025 20:13:41 +0000 (0:00:04.048) 0:00:42.330 *********** 2025-06-02 20:16:59.240458 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:16:59.240470 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:16:59.240480 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:16:59.240491 | orchestrator | 2025-06-02 20:16:59.240502 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-02 20:16:59.240513 | orchestrator | Monday 02 June 2025 20:13:44 +0000 (0:00:02.606) 0:00:44.937 *********** 2025-06-02 20:16:59.240524 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-02 20:16:59.240535 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-02 20:16:59.240545 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-06-02 20:16:59.240556 | orchestrator | changed: [testbed-node-3] => 
(item=ceph.client.cinder-backup.keyring) 2025-06-02 20:16:59.240568 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 20:16:59.240585 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 20:16:59.240596 | orchestrator | 2025-06-02 20:16:59.240607 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-02 20:16:59.240618 | orchestrator | Monday 02 June 2025 20:13:47 +0000 (0:00:03.124) 0:00:48.061 *********** 2025-06-02 20:16:59.240630 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-02 20:16:59.240641 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-02 20:16:59.240651 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-02 20:16:59.240662 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-02 20:16:59.240680 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-06-02 20:16:59.240690 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-02 20:16:59.240701 | orchestrator | 2025-06-02 20:16:59.240712 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-02 20:16:59.240723 | orchestrator | Monday 02 June 2025 20:13:48 +0000 (0:00:01.063) 0:00:49.125 *********** 2025-06-02 20:16:59.240733 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.240744 | orchestrator | 2025-06-02 20:16:59.240755 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-02 20:16:59.240766 | orchestrator | Monday 02 June 2025 20:13:48 +0000 (0:00:00.235) 0:00:49.360 *********** 2025-06-02 20:16:59.240776 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.240788 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:16:59.240800 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:16:59.240810 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 20:16:59.240821 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:16:59.240831 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:16:59.240841 | orchestrator | 2025-06-02 20:16:59.240852 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 20:16:59.240863 | orchestrator | Monday 02 June 2025 20:13:49 +0000 (0:00:01.130) 0:00:50.491 *********** 2025-06-02 20:16:59.240886 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:16:59.240898 | orchestrator | 2025-06-02 20:16:59.240909 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-02 20:16:59.240920 | orchestrator | Monday 02 June 2025 20:13:51 +0000 (0:00:01.977) 0:00:52.468 *********** 2025-06-02 20:16:59.240930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.240942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.240961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.240980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.240998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.241012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.241025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.241037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.241320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.241345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.241367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.241381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.241414 | orchestrator | 2025-06-02 20:16:59.241426 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-02 20:16:59.241437 | orchestrator | Monday 02 June 2025 20:13:55 +0000 (0:00:04.033) 0:00:56.502 *********** 2025-06-02 20:16:59.241449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:16:59.241483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:16:59.241495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.241511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.241523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:16:59.241534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.241544 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.241555 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:16:59.241566 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:16:59.241577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.241600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.241612 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:16:59.241628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.241640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.241652 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:16:59.241663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.241681 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.241693 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:16:59.242234 | orchestrator | 2025-06-02 20:16:59.242264 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-02 20:16:59.242275 | orchestrator | Monday 02 June 2025 20:13:57 +0000 (0:00:02.064) 0:00:58.567 *********** 2025-06-02 20:16:59.242297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 
20:16:59.242319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.242331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:16:59.242342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.242363 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.242375 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:16:59.242416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:16:59.242438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.242449 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:16:59.242461 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.242477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.242488 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:16:59.242498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.242516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.242550 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:16:59.242569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.242580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.242589 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:16:59.242599 | orchestrator | 2025-06-02 20:16:59.242609 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-02 20:16:59.242619 | orchestrator | Monday 02 June 2025 20:13:59 +0000 (0:00:02.145) 0:01:00.712 *********** 2025-06-02 20:16:59.242634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.242644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.242666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.242682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.242696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.242708 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.242719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.242738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.242755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.242766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.242775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.242791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.242810 | orchestrator | 2025-06-02 20:16:59.242821 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-02 20:16:59.242832 | orchestrator | Monday 02 June 2025 20:14:03 +0000 (0:00:04.037) 0:01:04.750 *********** 2025-06-02 20:16:59.242842 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 20:16:59.242853 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:16:59.242863 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 20:16:59.242873 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 20:16:59.242883 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 20:16:59.242892 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:16:59.242902 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 
20:16:59.242912 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:16:59.242922 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 20:16:59.242931 | orchestrator | 2025-06-02 20:16:59.242941 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-02 20:16:59.242951 | orchestrator | Monday 02 June 2025 20:14:07 +0000 (0:00:03.782) 0:01:08.533 *********** 2025-06-02 20:16:59.242961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.242981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.242998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.243028 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243069 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243100 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243112 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243136 | orchestrator | 
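The items looped over in the tasks above are kolla-ansible service-definition mappings: each key is a service name and each value carries the container name, image, volumes, and a Docker-style healthcheck. A minimal sketch of that shape, with a hypothetical helper (not part of kolla-ansible) that pulls the healthcheck command out of each enabled service, illustrates how the dicts printed in the log are structured:

```python
# Illustrative only: dict shape mirrors the service definitions printed in
# the log above; the helper function is a hypothetical convenience, not a
# kolla-ansible API.
services = {
    "cinder-volume": {
        "container_name": "cinder_volume",
        "group": "cinder-volume",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cinder-volume:2024.2",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
            "timeout": "30",
        },
    },
    "cinder-backup": {
        "container_name": "cinder_backup",
        "group": "cinder-backup",
        "enabled": False,  # hypothetical: a disabled service is skipped
        "image": "registry.osism.tech/kolla/cinder-backup:2024.2",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port cinder-backup 5672"],
            "timeout": "30",
        },
    },
}

def enabled_healthchecks(services):
    """Map container_name -> healthcheck shell command for enabled services."""
    return {
        svc["container_name"]: svc["healthcheck"]["test"][1]
        for svc in services.values()
        if svc.get("enabled") and "healthcheck" in svc
    }
```

This also matches the skip/changed pattern in the log: hosts not in a service's group, or services with `enabled` false, show up as `skipping`, while matching hosts get the config files rendered and report `changed`.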
2025-06-02 20:16:59.243148 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-02 20:16:59.243159 | orchestrator | Monday 02 June 2025 20:14:18 +0000 (0:00:10.744) 0:01:19.277 *********** 2025-06-02 20:16:59.243174 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.243184 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:16:59.243194 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:16:59.243203 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:16:59.243213 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:16:59.243223 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:16:59.243232 | orchestrator | 2025-06-02 20:16:59.243243 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-02 20:16:59.243252 | orchestrator | Monday 02 June 2025 20:14:21 +0000 (0:00:02.839) 0:01:22.117 *********** 2025-06-02 20:16:59.243263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:16:59.243289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.243299 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.243309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:16:59.243321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.243337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.243348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:16:59.243368 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.243375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.243381 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:16:59.243418 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:16:59.243429 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:16:59.243439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.243450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.243462 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:16:59.243481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.243504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:16:59.243516 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:16:59.243527 | orchestrator | 2025-06-02 20:16:59.243538 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-02 20:16:59.243549 | orchestrator | Monday 02 June 2025 20:14:22 +0000 (0:00:01.326) 0:01:23.443 *********** 2025-06-02 20:16:59.243559 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.243572 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:16:59.243579 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:16:59.243585 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:16:59.243591 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:16:59.243597 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:16:59.243604 | orchestrator | 2025-06-02 20:16:59.243610 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-02 20:16:59.243616 | orchestrator | Monday 02 June 2025 20:14:23 +0000 (0:00:00.663) 0:01:24.106 *********** 2025-06-02 20:16:59.243623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.243629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.243642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:16:59.243669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243679 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243709 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243734 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:16:59.243747 | orchestrator | 2025-06-02 20:16:59.243753 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 20:16:59.243759 | orchestrator | Monday 02 June 2025 20:14:25 +0000 (0:00:02.306) 0:01:26.413 *********** 2025-06-02 20:16:59.243766 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.243772 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:16:59.243778 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:16:59.243784 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:16:59.243790 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 20:16:59.243796 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:16:59.243807 | orchestrator | 2025-06-02 20:16:59.243813 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-02 20:16:59.243819 | orchestrator | Monday 02 June 2025 20:14:26 +0000 (0:00:00.792) 0:01:27.206 *********** 2025-06-02 20:16:59.243825 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:16:59.243832 | orchestrator | 2025-06-02 20:16:59.243838 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-02 20:16:59.243844 | orchestrator | Monday 02 June 2025 20:14:28 +0000 (0:00:02.190) 0:01:29.396 *********** 2025-06-02 20:16:59.243850 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:16:59.243856 | orchestrator | 2025-06-02 20:16:59.243862 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-02 20:16:59.243869 | orchestrator | Monday 02 June 2025 20:14:31 +0000 (0:00:02.454) 0:01:31.851 *********** 2025-06-02 20:16:59.243875 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:16:59.243881 | orchestrator | 2025-06-02 20:16:59.243887 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:16:59.243893 | orchestrator | Monday 02 June 2025 20:14:52 +0000 (0:00:21.062) 0:01:52.914 *********** 2025-06-02 20:16:59.243899 | orchestrator | 2025-06-02 20:16:59.243910 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:16:59.243918 | orchestrator | Monday 02 June 2025 20:14:52 +0000 (0:00:00.091) 0:01:53.006 *********** 2025-06-02 20:16:59.243929 | orchestrator | 2025-06-02 20:16:59.243938 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:16:59.243949 | orchestrator | Monday 02 June 2025 20:14:52 
+0000 (0:00:00.077) 0:01:53.083 *********** 2025-06-02 20:16:59.243960 | orchestrator | 2025-06-02 20:16:59.243972 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:16:59.243984 | orchestrator | Monday 02 June 2025 20:14:52 +0000 (0:00:00.089) 0:01:53.173 *********** 2025-06-02 20:16:59.243994 | orchestrator | 2025-06-02 20:16:59.244003 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:16:59.244012 | orchestrator | Monday 02 June 2025 20:14:52 +0000 (0:00:00.090) 0:01:53.264 *********** 2025-06-02 20:16:59.244019 | orchestrator | 2025-06-02 20:16:59.244025 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:16:59.244030 | orchestrator | Monday 02 June 2025 20:14:52 +0000 (0:00:00.083) 0:01:53.348 *********** 2025-06-02 20:16:59.244035 | orchestrator | 2025-06-02 20:16:59.244041 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-02 20:16:59.244046 | orchestrator | Monday 02 June 2025 20:14:52 +0000 (0:00:00.062) 0:01:53.410 *********** 2025-06-02 20:16:59.244052 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:16:59.244058 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:16:59.244063 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:16:59.244068 | orchestrator | 2025-06-02 20:16:59.244074 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-02 20:16:59.244079 | orchestrator | Monday 02 June 2025 20:15:17 +0000 (0:00:24.627) 0:02:18.038 *********** 2025-06-02 20:16:59.244085 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:16:59.244093 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:16:59.244099 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:16:59.244105 | orchestrator | 2025-06-02 20:16:59.244121 | orchestrator | RUNNING HANDLER 
[cinder : Restart cinder-volume container] ********************* 2025-06-02 20:16:59.244126 | orchestrator | Monday 02 June 2025 20:15:23 +0000 (0:00:05.870) 0:02:23.909 *********** 2025-06-02 20:16:59.244132 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:16:59.244137 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:16:59.244149 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:16:59.244155 | orchestrator | 2025-06-02 20:16:59.244160 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-02 20:16:59.244166 | orchestrator | Monday 02 June 2025 20:16:47 +0000 (0:01:24.798) 0:03:48.708 *********** 2025-06-02 20:16:59.244171 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:16:59.244182 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:16:59.244187 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:16:59.244193 | orchestrator | 2025-06-02 20:16:59.244198 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-02 20:16:59.244204 | orchestrator | Monday 02 June 2025 20:16:54 +0000 (0:00:07.026) 0:03:55.734 *********** 2025-06-02 20:16:59.244209 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:16:59.244219 | orchestrator | 2025-06-02 20:16:59.244224 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:16:59.244230 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 20:16:59.244333 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 20:16:59.244343 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 20:16:59.244348 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 20:16:59.244354 | orchestrator | 
testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 20:16:59.244359 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 20:16:59.244365 | orchestrator | 2025-06-02 20:16:59.244370 | orchestrator | 2025-06-02 20:16:59.244376 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:16:59.244381 | orchestrator | Monday 02 June 2025 20:16:55 +0000 (0:00:00.587) 0:03:56.322 *********** 2025-06-02 20:16:59.244437 | orchestrator | =============================================================================== 2025-06-02 20:16:59.244444 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 84.80s 2025-06-02 20:16:59.244449 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.63s 2025-06-02 20:16:59.244455 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.06s 2025-06-02 20:16:59.244460 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.74s 2025-06-02 20:16:59.244466 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.33s 2025-06-02 20:16:59.244471 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.03s 2025-06-02 20:16:59.244477 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.69s 2025-06-02 20:16:59.244483 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.87s 2025-06-02 20:16:59.244496 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.06s 2025-06-02 20:16:59.244502 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.05s 2025-06-02 20:16:59.244507 | orchestrator | cinder : Copying over config.json 
files for services -------------------- 4.04s 2025-06-02 20:16:59.244513 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.03s 2025-06-02 20:16:59.244518 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.78s 2025-06-02 20:16:59.244523 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.55s 2025-06-02 20:16:59.244529 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.47s 2025-06-02 20:16:59.244534 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.24s 2025-06-02 20:16:59.244539 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.12s 2025-06-02 20:16:59.244545 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.96s 2025-06-02 20:16:59.244557 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.84s 2025-06-02 20:16:59.244562 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.61s 2025-06-02 20:16:59.244568 | orchestrator | 2025-06-02 20:16:59 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:16:59.244573 | orchestrator | 2025-06-02 20:16:59 | INFO  | Task afb2f45f-835e-4242-808a-e11dfff28d56 is in state STARTED 2025-06-02 20:16:59.244579 | orchestrator | 2025-06-02 20:16:59 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED 2025-06-02 20:16:59.244589 | orchestrator | 2025-06-02 20:16:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:02.277093 | orchestrator | 2025-06-02 20:17:02 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:17:02.277178 | orchestrator | 2025-06-02 20:17:02 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:17:02.277739 | orchestrator | 
2025-06-02 20:17:02 | INFO  | Task afb2f45f-835e-4242-808a-e11dfff28d56 is in state STARTED 2025-06-02 20:17:02.278368 | orchestrator | 2025-06-02 20:17:02 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED 2025-06-02 20:17:02.278440 | orchestrator | 2025-06-02 20:17:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:35.717803 | orchestrator | 2025-06-02 20:17:35 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED 2025-06-02 20:17:35.718214 | orchestrator | 2025-06-02 20:17:35 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:17:35.719817 | orchestrator | 2025-06-02 20:17:35 | INFO  | Task afb2f45f-835e-4242-808a-e11dfff28d56 is in state SUCCESS 2025-06-02 20:17:35.721016 | orchestrator | 2025-06-02 20:17:35.721044 | orchestrator | 2025-06-02 20:17:35.721049 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:17:35.721055 | orchestrator | 2025-06-02 20:17:35.721059 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:17:35.721064 | orchestrator | Monday 02 June 2025 20:15:40 +0000 (0:00:00.238) 0:00:00.238 *********** 2025-06-02 20:17:35.721069 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:17:35.721075 | orchestrator | ok: [testbed-node-1] 
2025-06-02 20:17:35.721159 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:17:35.721168 | orchestrator | 2025-06-02 20:17:35.721174 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:17:35.721181 | orchestrator | Monday 02 June 2025 20:15:40 +0000 (0:00:00.254) 0:00:00.493 *********** 2025-06-02 20:17:35.721188 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-02 20:17:35.721196 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-02 20:17:35.721201 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-02 20:17:35.721205 | orchestrator | 2025-06-02 20:17:35.721209 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-02 20:17:35.721213 | orchestrator | 2025-06-02 20:17:35.721217 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 20:17:35.721222 | orchestrator | Monday 02 June 2025 20:15:41 +0000 (0:00:00.343) 0:00:00.836 *********** 2025-06-02 20:17:35.721226 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:17:35.721231 | orchestrator | 2025-06-02 20:17:35.721235 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-02 20:17:35.721239 | orchestrator | Monday 02 June 2025 20:15:41 +0000 (0:00:00.492) 0:00:01.328 *********** 2025-06-02 20:17:35.721244 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-02 20:17:35.721248 | orchestrator | 2025-06-02 20:17:35.721252 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-02 20:17:35.721310 | orchestrator | Monday 02 June 2025 20:15:44 +0000 (0:00:03.279) 0:00:04.608 *********** 2025-06-02 20:17:35.721317 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-02 20:17:35.721321 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-02 20:17:35.721325 | orchestrator | 2025-06-02 20:17:35.721329 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-02 20:17:35.721335 | orchestrator | Monday 02 June 2025 20:15:51 +0000 (0:00:06.288) 0:00:10.896 *********** 2025-06-02 20:17:35.721341 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:17:35.721507 | orchestrator | 2025-06-02 20:17:35.721517 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-02 20:17:35.721521 | orchestrator | Monday 02 June 2025 20:15:54 +0000 (0:00:03.237) 0:00:14.133 *********** 2025-06-02 20:17:35.721542 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:17:35.721546 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-02 20:17:35.721550 | orchestrator | 2025-06-02 20:17:35.721554 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-02 20:17:35.721558 | orchestrator | Monday 02 June 2025 20:15:58 +0000 (0:00:03.836) 0:00:17.970 *********** 2025-06-02 20:17:35.721562 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:17:35.721566 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-02 20:17:35.721570 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-02 20:17:35.721574 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-02 20:17:35.721578 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-02 20:17:35.721582 | orchestrator | 2025-06-02 20:17:35.721586 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-02 20:17:35.721590 | 
orchestrator | Monday 02 June 2025 20:16:13 +0000 (0:00:15.300) 0:00:33.270 *********** 2025-06-02 20:17:35.721594 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-02 20:17:35.721598 | orchestrator | 2025-06-02 20:17:35.721602 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-02 20:17:35.721606 | orchestrator | Monday 02 June 2025 20:16:18 +0000 (0:00:04.828) 0:00:38.099 *********** 2025-06-02 20:17:35.721613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.721628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.721638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.721647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721684 | orchestrator | 2025-06-02 20:17:35.721688 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-02 20:17:35.721695 | orchestrator | Monday 02 June 2025 20:16:20 +0000 (0:00:01.811) 0:00:39.910 *********** 2025-06-02 20:17:35.721699 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-02 20:17:35.721703 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-02 20:17:35.721707 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-02 20:17:35.721711 | orchestrator | 2025-06-02 20:17:35.721715 | orchestrator | TASK [barbican : Check if 
policies shall be overwritten] *********************** 2025-06-02 20:17:35.721718 | orchestrator | Monday 02 June 2025 20:16:21 +0000 (0:00:01.251) 0:00:41.162 *********** 2025-06-02 20:17:35.721722 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:35.721726 | orchestrator | 2025-06-02 20:17:35.721730 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-02 20:17:35.721734 | orchestrator | Monday 02 June 2025 20:16:21 +0000 (0:00:00.129) 0:00:41.291 *********** 2025-06-02 20:17:35.721738 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:35.721742 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:35.721746 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:35.721750 | orchestrator | 2025-06-02 20:17:35.721754 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 20:17:35.721758 | orchestrator | Monday 02 June 2025 20:16:22 +0000 (0:00:00.500) 0:00:41.792 *********** 2025-06-02 20:17:35.721762 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:17:35.721766 | orchestrator | 2025-06-02 20:17:35.721770 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-02 20:17:35.721774 | orchestrator | Monday 02 June 2025 20:16:22 +0000 (0:00:00.678) 0:00:42.470 *********** 2025-06-02 20:17:35.721778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.721787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.721791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.721801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.721835 | orchestrator | 2025-06-02 20:17:35.721840 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-02 20:17:35.721844 | orchestrator | Monday 02 June 2025 20:16:27 +0000 (0:00:04.325) 0:00:46.795 *********** 2025-06-02 20:17:35.721850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:17:35.721855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721863 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:35.721871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:17:35.721878 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721887 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:35.721893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:17:35.721898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721906 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:35.721910 | orchestrator | 2025-06-02 20:17:35.721914 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-02 20:17:35.721918 | orchestrator | Monday 02 June 2025 20:16:28 +0000 (0:00:01.150) 0:00:47.946 *********** 2025-06-02 20:17:35.721926 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:17:35.721934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721945 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:35.721949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:17:35.721953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721958 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721968 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:35.721976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:17:35.721983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.721991 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:35.721995 | orchestrator | 2025-06-02 20:17:35.721999 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-02 20:17:35.722003 | orchestrator | Monday 02 June 2025 20:16:29 +0000 (0:00:01.347) 0:00:49.294 *********** 2025-06-02 20:17:35.722007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.722047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.722060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.722071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722108 | orchestrator | 2025-06-02 20:17:35.722113 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-02 20:17:35.722117 | orchestrator | Monday 02 June 2025 20:16:33 +0000 (0:00:03.985) 0:00:53.279 *********** 2025-06-02 20:17:35.722121 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:35.722125 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:17:35.722128 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:17:35.722132 | orchestrator | 2025-06-02 20:17:35.722137 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-02 20:17:35.722142 | orchestrator | Monday 02 June 2025 20:16:36 +0000 (0:00:02.627) 0:00:55.906 *********** 2025-06-02 20:17:35.722146 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 20:17:35.722151 | orchestrator | 2025-06-02 20:17:35.722155 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-02 20:17:35.722160 | orchestrator | Monday 02 June 2025 20:16:37 +0000 (0:00:01.478) 0:00:57.385 *********** 2025-06-02 20:17:35.722164 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:35.722169 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:35.722173 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:35.722178 | orchestrator | 2025-06-02 20:17:35.722185 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-02 20:17:35.722190 | orchestrator | Monday 02 June 2025 20:16:38 +0000 (0:00:00.792) 0:00:58.177 *********** 2025-06-02 20:17:35.722194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.722199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.722211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.722216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 
20:17:35.722246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722251 | orchestrator | 2025-06-02 20:17:35.722255 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-02 20:17:35.722259 | orchestrator | Monday 02 June 2025 20:16:46 +0000 (0:00:08.399) 0:01:06.576 *********** 2025-06-02 20:17:35.722266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:17:35.722273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.722277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.722281 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:35.722289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:17:35.722293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.722299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.722304 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:35.722308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:17:35.722315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.722319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:17:35.722327 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:35.722331 | orchestrator | 
2025-06-02 20:17:35.722335 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-02 20:17:35.722339 | orchestrator | Monday 02 June 2025 20:16:48 +0000 (0:00:01.913) 0:01:08.490 *********** 2025-06-02 20:17:35.722343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.722350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.722392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:35.722434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:35.722461 | orchestrator | 2025-06-02 20:17:35.722465 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 20:17:35.722469 | orchestrator | Monday 02 June 2025 20:16:52 +0000 (0:00:03.642) 0:01:12.132 *********** 2025-06-02 20:17:35.722473 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:35.722477 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:35.722481 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:35.722484 | orchestrator | 2025-06-02 20:17:35.722488 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-02 20:17:35.722492 | orchestrator | Monday 02 June 2025 20:16:52 +0000 (0:00:00.313) 0:01:12.446 *********** 2025-06-02 20:17:35.722503 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:35.722507 | orchestrator | 2025-06-02 20:17:35.722511 | 
orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-02 20:17:35.722515 | orchestrator | Monday 02 June 2025 20:16:54 +0000 (0:00:02.085) 0:01:14.532 *********** 2025-06-02 20:17:35.722519 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:35.722523 | orchestrator | 2025-06-02 20:17:35.722527 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-02 20:17:35.722531 | orchestrator | Monday 02 June 2025 20:16:57 +0000 (0:00:02.320) 0:01:16.852 *********** 2025-06-02 20:17:35.722535 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:35.722539 | orchestrator | 2025-06-02 20:17:35.722542 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 20:17:35.722546 | orchestrator | Monday 02 June 2025 20:17:08 +0000 (0:00:11.649) 0:01:28.501 *********** 2025-06-02 20:17:35.722550 | orchestrator | 2025-06-02 20:17:35.722554 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 20:17:35.722558 | orchestrator | Monday 02 June 2025 20:17:08 +0000 (0:00:00.060) 0:01:28.562 *********** 2025-06-02 20:17:35.722562 | orchestrator | 2025-06-02 20:17:35.722566 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 20:17:35.722570 | orchestrator | Monday 02 June 2025 20:17:08 +0000 (0:00:00.059) 0:01:28.621 *********** 2025-06-02 20:17:35.722573 | orchestrator | 2025-06-02 20:17:35.722577 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-02 20:17:35.722581 | orchestrator | Monday 02 June 2025 20:17:08 +0000 (0:00:00.062) 0:01:28.684 *********** 2025-06-02 20:17:35.722585 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:35.722589 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:17:35.722593 | orchestrator | changed: [testbed-node-1] 
2025-06-02 20:17:35.722597 | orchestrator | 2025-06-02 20:17:35.722601 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-02 20:17:35.722605 | orchestrator | Monday 02 June 2025 20:17:16 +0000 (0:00:08.058) 0:01:36.742 *********** 2025-06-02 20:17:35.722608 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:35.722612 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:17:35.722616 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:17:35.722620 | orchestrator | 2025-06-02 20:17:35.722624 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-02 20:17:35.722628 | orchestrator | Monday 02 June 2025 20:17:23 +0000 (0:00:06.668) 0:01:43.411 *********** 2025-06-02 20:17:35.722632 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:35.722636 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:17:35.722639 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:17:35.722643 | orchestrator | 2025-06-02 20:17:35.722647 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:17:35.722652 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 20:17:35.722658 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 20:17:35.722662 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 20:17:35.722666 | orchestrator | 2025-06-02 20:17:35.722670 | orchestrator | 2025-06-02 20:17:35.722674 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:17:35.722678 | orchestrator | Monday 02 June 2025 20:17:34 +0000 (0:00:11.080) 0:01:54.491 *********** 2025-06-02 20:17:35.722681 | orchestrator | 
=============================================================================== 2025-06-02 20:17:35.722685 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.30s 2025-06-02 20:17:35.722692 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.65s 2025-06-02 20:17:35.722699 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.08s 2025-06-02 20:17:35.722703 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.40s 2025-06-02 20:17:35.722707 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.06s 2025-06-02 20:17:35.722711 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.67s 2025-06-02 20:17:35.722715 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.29s 2025-06-02 20:17:35.722719 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.83s 2025-06-02 20:17:35.722723 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.33s 2025-06-02 20:17:35.722727 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.99s 2025-06-02 20:17:35.722731 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.84s 2025-06-02 20:17:35.722734 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.64s 2025-06-02 20:17:35.722738 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.28s 2025-06-02 20:17:35.722742 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.24s 2025-06-02 20:17:35.722746 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.63s 2025-06-02 20:17:35.722750 | orchestrator | barbican : 
Creating barbican database user and setting permissions ------ 2.32s
2025-06-02 20:17:35.722754 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.09s
2025-06-02 20:17:35.722758 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.91s
2025-06-02 20:17:35.722761 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.81s
2025-06-02 20:17:35.722768 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.48s
2025-06-02 20:17:35.722772 | orchestrator | 2025-06-02 20:17:35 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED
2025-06-02 20:17:35.722776 | orchestrator | 2025-06-02 20:17:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:17:38.745929 | orchestrator | 2025-06-02 20:17:38 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED
2025-06-02 20:17:38.747772 | orchestrator | 2025-06-02 20:17:38 | INFO  | Task d718e21f-333f-4610-b476-71b435c796ea is in state STARTED
2025-06-02 20:17:38.748345 | orchestrator | 2025-06-02 20:17:38 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:17:38.748884 | orchestrator | 2025-06-02 20:17:38 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED
2025-06-02 20:17:38.748917 | orchestrator | 2025-06-02 20:17:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:18:21.327978 | orchestrator | 2025-06-02 20:18:21 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED
2025-06-02 20:18:21.328304 | orchestrator | 2025-06-02 20:18:21 | INFO  | Task d718e21f-333f-4610-b476-71b435c796ea is in state SUCCESS
2025-06-02 20:18:21.332070 | orchestrator | 2025-06-02 20:18:21 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:18:21.335976 | orchestrator | 2025-06-02 20:18:21 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED
2025-06-02 20:18:21.336039 | orchestrator | 2025-06-02 20:18:21 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:18:24.381186 | orchestrator | 2025-06-02 20:18:24 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED
2025-06-02 20:18:24.383891 | orchestrator | 2025-06-02 20:18:24 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:18:24.385883 | orchestrator | 2025-06-02 20:18:24 | INFO  | Task 72e369ff-5b06-43dd-983a-c6d62804c30b is in state STARTED
2025-06-02 20:18:24.385951 | orchestrator | 2025-06-02 20:18:24 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED
2025-06-02 20:18:24.386108 | orchestrator | 2025-06-02 20:18:24 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:19:37.419561 | orchestrator | 2025-06-02 20:19:37 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state STARTED
2025-06-02 20:19:37.419648 | orchestrator | 2025-06-02 20:19:37 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:19:37.420596 | orchestrator | 2025-06-02 20:19:37 | INFO  | Task 72e369ff-5b06-43dd-983a-c6d62804c30b is in state SUCCESS
2025-06-02 20:19:37.421705 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-06-02 20:19:37.421718 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-06-02 20:19:37.421725 | orchestrator | Monday 02 June 2025 20:17:41 +0000 (0:00:00.098) 0:00:00.098 ***********
2025-06-02 20:19:37.421732 | orchestrator | changed: [localhost]
2025-06-02 20:19:37.421747 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-06-02 20:19:37.421752 | orchestrator | Monday 02 June 2025 20:17:43 +0000 (0:00:01.636) 0:00:01.736 ***********
2025-06-02 20:19:37.421756 | orchestrator | changed: [localhost]
2025-06-02 20:19:37.421764 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-06-02 20:19:37.421768 | orchestrator | Monday 02 June 2025 20:18:15 +0000 (0:00:32.344) 0:00:34.081 ***********
2025-06-02 20:19:37.421772 | orchestrator | changed: [localhost]
2025-06-02 20:19:37.421779 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:19:37.421787 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:19:37.421791 | orchestrator | Monday 02 June 2025 20:18:19 +0000 (0:00:04.193) 0:00:38.274 ***********
2025-06-02 20:19:37.421795 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:19:37.421799 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:19:37.421802 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:19:37.421810 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:19:37.421814 | orchestrator | Monday 02 June 2025 20:18:20 +0000 (0:00:00.291) 0:00:38.566 ***********
2025-06-02 20:19:37.421833 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-06-02 20:19:37.421837 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-06-02 20:19:37.421842 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-06-02 20:19:37.421845 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-06-02 20:19:37.421853 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-06-02 20:19:37.421857 | orchestrator | skipping: no hosts matched
2025-06-02 20:19:37.421865 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:19:37.421869 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:19:37.421875 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:19:37.421880 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:19:37.421885 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:19:37.421896 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:19:37.421900 | orchestrator | Monday 02 June 2025 20:18:20 +0000 (0:00:00.468) 0:00:39.034 ***********
2025-06-02 20:19:37.421903 | orchestrator | ===============================================================================
2025-06-02 20:19:37.421907 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 32.34s
2025-06-02 20:19:37.421911 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.19s
2025-06-02 20:19:37.421915 | orchestrator | Ensure the destination directory exists --------------------------------- 1.64s
2025-06-02 20:19:37.421919 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2025-06-02 20:19:37.421922 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-06-02 20:19:37.421934 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:19:37.421941 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:19:37.421945 | orchestrator | Monday 02 June 2025 20:18:25 +0000 (0:00:00.276) 0:00:00.276 ***********
2025-06-02 20:19:37.421948 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:19:37.421952 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:19:37.421956 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:19:37.421964 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:19:37.421967 | orchestrator | Monday 02 June 2025 20:18:26 +0000 (0:00:00.307) 0:00:00.584 ***********
2025-06-02 20:19:37.421971 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-06-02 20:19:37.421975 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-06-02 20:19:37.421979 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-06-02 20:19:37.421986 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-06-02 20:19:37.421994 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-02 20:19:37.421997 | orchestrator | Monday 02 June 2025 20:18:26 +0000 (0:00:00.445) 0:00:01.030 ***********
2025-06-02 20:19:37.422001 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:19:37.422009 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-06-02 20:19:37.422049 | orchestrator | Monday 02 June 2025 20:18:27 +0000 (0:00:00.535) 0:00:01.565 ***********
2025-06-02 20:19:37.422063 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-06-02 20:19:37.422072 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-06-02 20:19:37.422078 | orchestrator | Monday 02 June 2025 20:18:30 +0000 (0:00:03.717) 0:00:05.282 ***********
2025-06-02 20:19:37.422084 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-06-02 20:19:37.422090 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-06-02 20:19:37.422107 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-06-02 20:19:37.422113 | orchestrator | Monday 02 June 2025 20:18:37 +0000 (0:00:06.765) 0:00:12.047 ***********
2025-06-02 20:19:37.422119 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 20:19:37.422131 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-06-02 20:19:37.422136 | orchestrator | Monday 02 June 2025 20:18:40 +0000 (0:00:03.203) 0:00:15.251 ***********
2025-06-02 20:19:37.422142 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 20:19:37.422148 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-06-02 20:19:37.422159 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-06-02 20:19:37.422165 | orchestrator | Monday 02 June 2025 20:18:44 +0000 (0:00:03.954) 0:00:19.205 ***********
2025-06-02 20:19:37.422171 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 20:19:37.422184 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-06-02 20:19:37.422190 | orchestrator | Monday 02 June 2025 20:18:48 +0000 (0:00:03.428) 0:00:22.634 ***********
2025-06-02 20:19:37.422196 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-06-02 20:19:37.422210 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-02 20:19:37.422217 | orchestrator | Monday 02 June 2025 20:18:52 +0000 (0:00:04.147) 0:00:26.781 ***********
2025-06-02 20:19:37.422221 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:37.422225 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:19:37.422229 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:19:37.422236 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-06-02 20:19:37.422240 | orchestrator | Monday 02 June 2025 20:18:52 +0000 (0:00:00.533) 0:00:27.315 ***********
2025-06-02 20:19:37.422247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422298 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-06-02 20:19:37.422302 | orchestrator | Monday 02 June 2025 20:18:53 +0000 (0:00:01.010) 0:00:28.326 ***********
2025-06-02 20:19:37.422306 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:37.422309 | orchestrator |
2025-06-02 20:19:37.422313 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-06-02 20:19:37.422317 | orchestrator | Monday 02 June 2025 20:18:54 +0000 (0:00:00.113) 0:00:28.439 ***********
2025-06-02 20:19:37.422321 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:37.422324 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:19:37.422328 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:19:37.422332 | orchestrator |
2025-06-02 20:19:37.422336 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-02 20:19:37.422339 | orchestrator | Monday 02 June 2025 20:18:54 +0000 (0:00:00.383) 0:00:28.823 ***********
2025-06-02 20:19:37.422343 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:19:37.422347 | orchestrator |
2025-06-02 20:19:37.422351 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-06-02 20:19:37.422355 | orchestrator | Monday 02 June 2025 20:18:54 +0000 (0:00:00.477) 0:00:29.301 ***********
2025-06-02 20:19:37.422359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422375 | orchestrator |
2025-06-02 20:19:37.422381 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-06-02 20:19:37.422385 | orchestrator | Monday 02 June 2025 20:18:56 +0000 (0:00:01.496) 0:00:30.797 ***********
2025-06-02 20:19:37.422389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422393 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:37.422405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422409 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:19:37.422413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422420 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:19:37.422423 | orchestrator |
2025-06-02 20:19:37.422427 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-06-02 20:19:37.422431 | orchestrator | Monday 02 June 2025 20:18:57 +0000 (0:00:00.622) 0:00:31.419 ***********
2025-06-02 20:19:37.422435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422439 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:37.422446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422450 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:19:37.422457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422463 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:19:37.422467 | orchestrator |
2025-06-02 20:19:37.422471 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-06-02 20:19:37.422475 | orchestrator | Monday 02 June 2025 20:18:57 +0000 (0:00:00.959) 0:00:32.379 ***********
2025-06-02 20:19:37.422478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422495 | orchestrator |
2025-06-02 20:19:37.422499 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-06-02 20:19:37.422503 | orchestrator | Monday 02 June 2025 20:18:59 +0000 (0:00:01.708) 0:00:34.087 ***********
2025-06-02 20:19:37.422509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422524 | orchestrator |
2025-06-02 20:19:37.422528 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-06-02 20:19:37.422532 | orchestrator | Monday 02 June 2025 20:19:02 +0000 (0:00:03.149) 0:00:37.236 ***********
2025-06-02 20:19:37.422536 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-02 20:19:37.422539 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-02 20:19:37.422543 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-02 20:19:37.422547 | orchestrator |
2025-06-02 20:19:37.422551 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-06-02 20:19:37.422557 | orchestrator | Monday 02 June 2025 20:19:05 +0000 (0:00:02.488) 0:00:39.725 ***********
2025-06-02 20:19:37.422561 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:37.422564 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:19:37.422568 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:19:37.422572 | orchestrator |
2025-06-02 20:19:37.422576 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-06-02 20:19:37.422580 | orchestrator | Monday 02 June 2025 20:19:07 +0000 (0:00:02.026) 0:00:41.751 ***********
2025-06-02 20:19:37.422583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422590 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:37.422596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422600 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:19:37.422604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422608 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:19:37.422612 | orchestrator |
2025-06-02 20:19:37.422616 | orchestrator | TASK [placement : Check placement containers] **********************************
2025-06-02 20:19:37.422620 | orchestrator | Monday 02 June 2025 20:19:07 +0000 (0:00:00.430) 0:00:42.182 ***********
2025-06-02 20:19:37.422627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 20:19:37.422644 | orchestrator |
2025-06-02 20:19:37.422648 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-06-02 20:19:37.422652 | orchestrator | Monday 02 June 2025 20:19:09 +0000 (0:00:01.239) 0:00:43.421 ***********
2025-06-02 20:19:37.422656 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:37.422660 | orchestrator |
2025-06-02 20:19:37.422664 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-06-02 20:19:37.422667 | orchestrator | Monday 02 June 2025 20:19:11 +0000 (0:00:02.104) 0:00:45.525 ***********
2025-06-02 20:19:37.422671 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:37.422675 | orchestrator |
2025-06-02 20:19:37.422679 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-06-02 20:19:37.422682 | orchestrator | Monday 02 June 2025 20:19:13 +0000 (0:00:02.657) 0:00:48.183 ***********
2025-06-02 20:19:37.422686 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:37.422690 | orchestrator |
2025-06-02 20:19:37.422694 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-02 20:19:37.422697 | orchestrator | Monday 02 June 2025 20:19:28 +0000 (0:00:14.465) 0:01:02.648 ***********
2025-06-02 20:19:37.422701 | orchestrator |
2025-06-02 20:19:37.422705 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-02 20:19:37.422709 | orchestrator | Monday 02 June 2025 20:19:28 +0000 (0:00:00.071) 0:01:02.719 ***********
2025-06-02 20:19:37.422712 | orchestrator |
2025-06-02 20:19:37.422716 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-02 20:19:37.422720 | orchestrator | Monday 02 June 2025 20:19:28 +0000 (0:00:00.072) 0:01:02.791 ***********
2025-06-02 20:19:37.422724 | orchestrator |
2025-06-02 20:19:37.422728 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-06-02 20:19:37.422732 | orchestrator | Monday 02 June 2025 20:19:28 +0000 (0:00:00.079) 0:01:02.870 ***********
2025-06-02 20:19:37.422735 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:19:37.422739 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:19:37.422743 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:37.422747 | orchestrator |
2025-06-02 20:19:37.422750 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:19:37.422754 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 20:19:37.422759 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 20:19:37.422762 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 20:19:37.422773 | orchestrator |
2025-06-02 20:19:37.422777 | orchestrator |
2025-06-02 20:19:37.422781 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:19:37.422785 | orchestrator | Monday 02 June 2025 20:19:36 +0000 (0:00:08.185) 0:01:11.056 ***********
2025-06-02 20:19:37.422789 | orchestrator | ===============================================================================
2025-06-02 20:19:37.422795 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.46s
2025-06-02 20:19:37.422798 | orchestrator | placement : Restart placement-api container ----------------------------- 8.19s
2025-06-02 20:19:37.422802 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.77s
2025-06-02 20:19:37.422806 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.15s
2025-06-02 20:19:37.422810 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.96s
2025-06-02 20:19:37.422814 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.72s
2025-06-02 20:19:37.422818 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.43s
2025-06-02 20:19:37.422823 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.20s
2025-06-02 20:19:37.422829 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.15s
2025-06-02 20:19:37.422834 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.66s
2025-06-02 20:19:37.422843 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.49s
2025-06-02 20:19:37.422851 | orchestrator | placement : Creating placement databases -------------------------------- 2.10s
2025-06-02 20:19:37.422857 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.03s
2025-06-02 20:19:37.422863 | orchestrator | placement : Copying over config.json files for services ----------------- 1.71s
2025-06-02 20:19:37.422869 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.50s
2025-06-02 20:19:37.422875 | orchestrator | placement : Check placement containers ---------------------------------- 1.24s
2025-06-02 20:19:37.422880 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.01s
2025-06-02 20:19:37.422886 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.96s
2025-06-02 20:19:37.422892 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.62s
2025-06-02 20:19:37.422902 | orchestrator | placement : include_tasks ----------------------------------------------- 0.54s
2025-06-02 20:19:37.425034 | orchestrator | 2025-06-02 20:19:37 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED
2025-06-02 20:19:37.425076 | orchestrator | 2025-06-02 20:19:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:19:40.470786 | orchestrator |
2025-06-02 20:19:40.470872 | orchestrator |
2025-06-02 20:19:40.470878 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:19:40.470883 | orchestrator |
2025-06-02 20:19:40.470887 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:19:40.470891 | orchestrator | Monday 02 June 2025 20:15:27 +0000 (0:00:00.252) 0:00:00.252 ***********
2025-06-02 20:19:40.470895 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:19:40.470901 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:19:40.470905 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:19:40.470909 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:19:40.470913 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:19:40.470916 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:19:40.470920 | orchestrator |
2025-06-02 20:19:40.470924 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:19:40.470928 | orchestrator | Monday 02 June 2025 20:15:27 +0000 (0:00:00.658) 0:00:00.911 ***********
2025-06-02 20:19:40.470932 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-02 20:19:40.470937 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-02 20:19:40.470940 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-02 20:19:40.470982 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-02 20:19:40.470987 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-02 20:19:40.470991 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-02 20:19:40.470995 | orchestrator | 2025-06-02 20:19:40.470998 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-02 20:19:40.471002 | orchestrator | 2025-06-02 20:19:40.471006 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 20:19:40.471010 | orchestrator | Monday 02 June 2025 20:15:28 +0000 (0:00:00.604) 0:00:01.515 *********** 2025-06-02 20:19:40.471015 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:19:40.471020 | orchestrator | 2025-06-02 20:19:40.471024 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-02 20:19:40.471028 | orchestrator | Monday 02 June 2025 20:15:29 +0000 (0:00:01.213) 0:00:02.729 *********** 2025-06-02 20:19:40.471031 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:19:40.471035 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:19:40.471039 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:19:40.471043 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:19:40.471047 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:19:40.471050 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:19:40.471054 | orchestrator | 2025-06-02 20:19:40.471058 | orchestrator | TASK [neutron : 
Get container volume facts] ************************************ 2025-06-02 20:19:40.471062 | orchestrator | Monday 02 June 2025 20:15:30 +0000 (0:00:01.229) 0:00:03.959 *********** 2025-06-02 20:19:40.471085 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:19:40.471094 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:19:40.471098 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:19:40.471102 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:19:40.471105 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:19:40.471109 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:19:40.471113 | orchestrator | 2025-06-02 20:19:40.471117 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-02 20:19:40.471121 | orchestrator | Monday 02 June 2025 20:15:31 +0000 (0:00:01.105) 0:00:05.064 *********** 2025-06-02 20:19:40.471125 | orchestrator | ok: [testbed-node-0] => { 2025-06-02 20:19:40.471130 | orchestrator |  "changed": false, 2025-06-02 20:19:40.471134 | orchestrator |  "msg": "All assertions passed" 2025-06-02 20:19:40.471151 | orchestrator | } 2025-06-02 20:19:40.471156 | orchestrator | ok: [testbed-node-1] => { 2025-06-02 20:19:40.471160 | orchestrator |  "changed": false, 2025-06-02 20:19:40.471163 | orchestrator |  "msg": "All assertions passed" 2025-06-02 20:19:40.471167 | orchestrator | } 2025-06-02 20:19:40.471171 | orchestrator | ok: [testbed-node-2] => { 2025-06-02 20:19:40.471175 | orchestrator |  "changed": false, 2025-06-02 20:19:40.471178 | orchestrator |  "msg": "All assertions passed" 2025-06-02 20:19:40.471182 | orchestrator | } 2025-06-02 20:19:40.471186 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 20:19:40.471190 | orchestrator |  "changed": false, 2025-06-02 20:19:40.471193 | orchestrator |  "msg": "All assertions passed" 2025-06-02 20:19:40.471204 | orchestrator | } 2025-06-02 20:19:40.471208 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 20:19:40.471212 | orchestrator |  
"changed": false, 2025-06-02 20:19:40.471215 | orchestrator |  "msg": "All assertions passed" 2025-06-02 20:19:40.471219 | orchestrator | } 2025-06-02 20:19:40.471223 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 20:19:40.471227 | orchestrator |  "changed": false, 2025-06-02 20:19:40.471230 | orchestrator |  "msg": "All assertions passed" 2025-06-02 20:19:40.471234 | orchestrator | } 2025-06-02 20:19:40.471238 | orchestrator | 2025-06-02 20:19:40.471242 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-02 20:19:40.471285 | orchestrator | Monday 02 June 2025 20:15:32 +0000 (0:00:00.803) 0:00:05.868 *********** 2025-06-02 20:19:40.471295 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.471299 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.471316 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.471320 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.471323 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.471327 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.471331 | orchestrator | 2025-06-02 20:19:40.471335 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-02 20:19:40.471339 | orchestrator | Monday 02 June 2025 20:15:33 +0000 (0:00:00.593) 0:00:06.461 *********** 2025-06-02 20:19:40.471342 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-02 20:19:40.471346 | orchestrator | 2025-06-02 20:19:40.471350 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-02 20:19:40.471354 | orchestrator | Monday 02 June 2025 20:15:36 +0000 (0:00:03.461) 0:00:09.922 *********** 2025-06-02 20:19:40.471366 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-02 20:19:40.471372 | orchestrator | changed: [testbed-node-0] => 
(item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-02 20:19:40.471376 | orchestrator | 2025-06-02 20:19:40.471391 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-02 20:19:40.471396 | orchestrator | Monday 02 June 2025 20:15:42 +0000 (0:00:06.227) 0:00:16.150 *********** 2025-06-02 20:19:40.471400 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:19:40.471405 | orchestrator | 2025-06-02 20:19:40.471409 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-02 20:19:40.471414 | orchestrator | Monday 02 June 2025 20:15:46 +0000 (0:00:03.173) 0:00:19.323 *********** 2025-06-02 20:19:40.471418 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:19:40.471423 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-02 20:19:40.471427 | orchestrator | 2025-06-02 20:19:40.471432 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-02 20:19:40.471436 | orchestrator | Monday 02 June 2025 20:15:49 +0000 (0:00:03.759) 0:00:23.083 *********** 2025-06-02 20:19:40.471441 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:19:40.471445 | orchestrator | 2025-06-02 20:19:40.471449 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-02 20:19:40.471453 | orchestrator | Monday 02 June 2025 20:15:53 +0000 (0:00:03.501) 0:00:26.585 *********** 2025-06-02 20:19:40.471457 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-02 20:19:40.471473 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-02 20:19:40.471477 | orchestrator | 2025-06-02 20:19:40.471481 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 20:19:40.471485 | orchestrator 
| Monday 02 June 2025 20:16:01 +0000 (0:00:07.794) 0:00:34.380 *********** 2025-06-02 20:19:40.471489 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.471492 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.471496 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.471500 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.471504 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.471507 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.471511 | orchestrator | 2025-06-02 20:19:40.471519 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-02 20:19:40.471523 | orchestrator | Monday 02 June 2025 20:16:02 +0000 (0:00:00.778) 0:00:35.158 *********** 2025-06-02 20:19:40.471531 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.471535 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.471539 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.471543 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.471547 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.471550 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.471557 | orchestrator | 2025-06-02 20:19:40.471561 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-02 20:19:40.471565 | orchestrator | Monday 02 June 2025 20:16:04 +0000 (0:00:02.218) 0:00:37.377 *********** 2025-06-02 20:19:40.471568 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:19:40.471572 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:19:40.471576 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:19:40.471580 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:19:40.471584 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:19:40.471587 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:19:40.471591 | orchestrator | 2025-06-02 20:19:40.471595 | orchestrator | TASK [Setting sysctl 
values] *************************************************** 2025-06-02 20:19:40.471599 | orchestrator | Monday 02 June 2025 20:16:05 +0000 (0:00:01.639) 0:00:39.016 *********** 2025-06-02 20:19:40.471603 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.471607 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.471610 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.471614 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.471618 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.471622 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.471625 | orchestrator | 2025-06-02 20:19:40.471629 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-02 20:19:40.471633 | orchestrator | Monday 02 June 2025 20:16:07 +0000 (0:00:02.125) 0:00:41.142 *********** 2025-06-02 20:19:40.471639 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.471652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.471659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.471667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.471671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.471675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.471679 | orchestrator | 2025-06-02 20:19:40.471683 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-02 20:19:40.471687 | orchestrator | Monday 02 June 2025 20:16:10 +0000 (0:00:02.945) 0:00:44.088 *********** 2025-06-02 20:19:40.471691 | orchestrator | [WARNING]: Skipped 2025-06-02 20:19:40.471696 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-02 20:19:40.471700 | orchestrator | due to this access issue: 2025-06-02 20:19:40.471706 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-02 20:19:40.471710 | orchestrator | a directory 2025-06-02 20:19:40.471714 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 20:19:40.471718 | orchestrator | 2025-06-02 20:19:40.471722 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 20:19:40.471728 | orchestrator | Monday 02 June 2025 20:16:11 +0000 (0:00:00.860) 0:00:44.949 *********** 2025-06-02 20:19:40.471732 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:19:40.471737 | orchestrator | 2025-06-02 20:19:40.471741 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-02 20:19:40.471744 | orchestrator | Monday 02 June 2025 20:16:13 +0000 (0:00:01.249) 0:00:46.198 *********** 2025-06-02 20:19:40.471748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.471756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.471760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.471764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.471775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2025-06-02 20:19:40.471782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.471786 | orchestrator | 2025-06-02 20:19:40.471790 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-02 20:19:40.471794 | orchestrator | Monday 02 June 2025 20:16:16 +0000 (0:00:03.248) 0:00:49.446 *********** 2025-06-02 20:19:40.471798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2025-06-02 20:19:40.471802 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.471806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.471810 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.471817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2025-06-02 20:19:40.471823 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.471831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.471835 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.471839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.471843 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.471847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.471851 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.471854 | orchestrator | 2025-06-02 20:19:40.471858 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-02 20:19:40.471862 | orchestrator | Monday 02 June 2025 20:16:18 +0000 (0:00:02.612) 0:00:52.058 *********** 2025-06-02 20:19:40.471866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.471870 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.471880 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.471891 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.471895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.471899 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.471903 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.471907 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.471911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.471915 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.471919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.471926 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.471930 | orchestrator | 2025-06-02 20:19:40.471933 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-02 20:19:40.471937 | orchestrator | Monday 02 June 2025 20:16:22 +0000 (0:00:03.224) 0:00:55.283 *********** 2025-06-02 20:19:40.471941 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.471945 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.471951 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.471955 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.471958 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.471962 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.471966 | orchestrator | 2025-06-02 20:19:40.471970 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-02 20:19:40.471976 | orchestrator | Monday 02 June 2025 20:16:24 +0000 (0:00:02.760) 0:00:58.043 *********** 2025-06-02 20:19:40.471980 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.471984 | orchestrator | 2025-06-02 20:19:40.471987 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-02 20:19:40.471991 | orchestrator | Monday 02 June 2025 20:16:25 +0000 (0:00:00.139) 0:00:58.183 *********** 2025-06-02 20:19:40.471995 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.471999 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472002 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472006 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472010 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472014 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472017 | orchestrator | 2025-06-02 20:19:40.472021 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-02 20:19:40.472025 | orchestrator | Monday 02 June 2025 20:16:25 +0000 (0:00:00.798) 0:00:58.982 *********** 2025-06-02 20:19:40.472029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.472033 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.472041 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.472045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472052 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40 | INFO  | Task d8ba7b09-e4b8-4fb3-abcc-1bf09d711f89 is in state SUCCESS 2025-06-02 20:19:40.472200 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472208 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472216 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472220 | orchestrator | 2025-06-02 20:19:40.472224 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-02 20:19:40.472228 | orchestrator | Monday 02 June 2025 20:16:28 +0000 (0:00:02.950) 0:01:01.932 *********** 2025-06-02 20:19:40.472232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.472273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.472282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.472289 | orchestrator | 2025-06-02 20:19:40.472294 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-02 20:19:40.472298 | orchestrator | Monday 02 June 2025 20:16:32 +0000 (0:00:04.007) 0:01:05.940 
*********** 2025-06-02 20:19:40.472302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.472316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.472332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.472336 | orchestrator | 2025-06-02 20:19:40.472340 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-02 20:19:40.472346 | orchestrator | Monday 02 June 2025 20:16:39 +0000 (0:00:06.792) 0:01:12.733 *********** 2025-06-02 20:19:40.472354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472358 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472367 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472378 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472383 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472403 | orchestrator | 2025-06-02 20:19:40.472407 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-02 20:19:40.472411 | orchestrator | Monday 02 June 2025 20:16:43 +0000 (0:00:03.467) 0:01:16.201 *********** 2025-06-02 20:19:40.472415 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472419 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472423 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472427 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:19:40.472431 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:19:40.472435 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:19:40.472439 | orchestrator | 2025-06-02 20:19:40.472443 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-02 20:19:40.472451 | orchestrator | Monday 02 June 2025 20:16:46 +0000 (0:00:03.129) 0:01:19.330 *********** 2025-06-02 20:19:40.472455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472459 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472468 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472476 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.472505 | orchestrator | 2025-06-02 20:19:40.472509 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-02 20:19:40.472513 | orchestrator | Monday 02 June 2025 20:16:51 +0000 (0:00:04.840) 0:01:24.171 *********** 2025-06-02 20:19:40.472517 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472521 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472525 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472529 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.472533 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472537 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472541 | orchestrator | 2025-06-02 20:19:40.472545 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] 
**************************** 2025-06-02 20:19:40.472549 | orchestrator | Monday 02 June 2025 20:16:53 +0000 (0:00:02.348) 0:01:26.520 *********** 2025-06-02 20:19:40.472553 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472557 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.472560 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472565 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472569 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472573 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472576 | orchestrator | 2025-06-02 20:19:40.472581 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-02 20:19:40.472585 | orchestrator | Monday 02 June 2025 20:16:55 +0000 (0:00:02.030) 0:01:28.550 *********** 2025-06-02 20:19:40.472589 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.472593 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472597 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472600 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472604 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472608 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472612 | orchestrator | 2025-06-02 20:19:40.472616 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-02 20:19:40.472620 | orchestrator | Monday 02 June 2025 20:16:58 +0000 (0:00:03.006) 0:01:31.557 *********** 2025-06-02 20:19:40.472624 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.472628 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472632 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472636 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472640 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472644 | orchestrator | skipping: [testbed-node-5] 
2025-06-02 20:19:40.472648 | orchestrator | 2025-06-02 20:19:40.472655 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-02 20:19:40.472659 | orchestrator | Monday 02 June 2025 20:17:00 +0000 (0:00:02.273) 0:01:33.830 *********** 2025-06-02 20:19:40.472666 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.472670 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472674 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472700 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472704 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472708 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472712 | orchestrator | 2025-06-02 20:19:40.472719 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-02 20:19:40.472723 | orchestrator | Monday 02 June 2025 20:17:02 +0000 (0:00:02.184) 0:01:36.015 *********** 2025-06-02 20:19:40.472727 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.472731 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472735 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472740 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472744 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472748 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472752 | orchestrator | 2025-06-02 20:19:40.472756 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-02 20:19:40.472760 | orchestrator | Monday 02 June 2025 20:17:04 +0000 (0:00:02.094) 0:01:38.110 *********** 2025-06-02 20:19:40.472764 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:19:40.472768 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472806 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:19:40.472811 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472815 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:19:40.472819 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.472823 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:19:40.472827 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472839 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:19:40.472844 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472848 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:19:40.472852 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472857 | orchestrator | 2025-06-02 20:19:40.472861 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-02 20:19:40.472871 | orchestrator | Monday 02 June 2025 20:17:06 +0000 (0:00:02.010) 0:01:40.120 *********** 2025-06-02 20:19:40.472876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.472881 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.472885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.472894 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.472905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.472910 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.472915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472919 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.472939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472945 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.472949 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.472958 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.472962 | orchestrator | 2025-06-02 20:19:40.472967 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-02 20:19:40.472972 | orchestrator | Monday 02 June 2025 20:17:08 +0000 (0:00:01.917) 0:01:42.037 *********** 2025-06-02 20:19:40.472977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-06-02 20:19:40.472982 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.473094 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-06-02 20:19:40.473102 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.473110 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.473122 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.473132 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473136 | orchestrator | 2025-06-02 20:19:40.473140 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-02 20:19:40.473144 | orchestrator | Monday 02 June 2025 20:17:11 +0000 (0:00:02.580) 0:01:44.618 *********** 2025-06-02 20:19:40.473148 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473151 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473158 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473162 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473165 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473169 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473173 | orchestrator | 2025-06-02 20:19:40.473177 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-02 20:19:40.473180 | orchestrator | Monday 02 June 2025 20:17:13 +0000 (0:00:02.428) 0:01:47.047 *********** 2025-06-02 20:19:40.473184 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473188 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473192 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473195 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:19:40.473199 | orchestrator | changed: 
[testbed-node-4] 2025-06-02 20:19:40.473203 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:19:40.473206 | orchestrator | 2025-06-02 20:19:40.473210 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-02 20:19:40.473214 | orchestrator | Monday 02 June 2025 20:17:17 +0000 (0:00:04.075) 0:01:51.122 *********** 2025-06-02 20:19:40.473218 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473221 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473225 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473229 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473232 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473236 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473240 | orchestrator | 2025-06-02 20:19:40.473243 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-02 20:19:40.473262 | orchestrator | Monday 02 June 2025 20:17:22 +0000 (0:00:04.063) 0:01:55.185 *********** 2025-06-02 20:19:40.473266 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473270 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473274 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473284 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473287 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473291 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473295 | orchestrator | 2025-06-02 20:19:40.473299 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-02 20:19:40.473302 | orchestrator | Monday 02 June 2025 20:17:24 +0000 (0:00:02.736) 0:01:57.922 *********** 2025-06-02 20:19:40.473306 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473310 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473314 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 20:19:40.473317 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473321 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473325 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473329 | orchestrator | 2025-06-02 20:19:40.473332 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-02 20:19:40.473336 | orchestrator | Monday 02 June 2025 20:17:27 +0000 (0:00:02.523) 0:02:00.445 *********** 2025-06-02 20:19:40.473340 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473344 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473347 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473351 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473355 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473358 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473362 | orchestrator | 2025-06-02 20:19:40.473366 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-02 20:19:40.473370 | orchestrator | Monday 02 June 2025 20:17:29 +0000 (0:00:01.848) 0:02:02.293 *********** 2025-06-02 20:19:40.473373 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473377 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473381 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473385 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473388 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473392 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473396 | orchestrator | 2025-06-02 20:19:40.473399 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-02 20:19:40.473403 | orchestrator | Monday 02 June 2025 20:17:31 +0000 (0:00:02.256) 0:02:04.550 *********** 2025-06-02 20:19:40.473407 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 20:19:40.473410 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473414 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473418 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473421 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473425 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473429 | orchestrator | 2025-06-02 20:19:40.473433 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-02 20:19:40.473436 | orchestrator | Monday 02 June 2025 20:17:33 +0000 (0:00:02.356) 0:02:06.906 *********** 2025-06-02 20:19:40.473440 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473444 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473448 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473451 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473455 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473459 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473463 | orchestrator | 2025-06-02 20:19:40.473466 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-02 20:19:40.473470 | orchestrator | Monday 02 June 2025 20:17:36 +0000 (0:00:02.862) 0:02:09.768 *********** 2025-06-02 20:19:40.473474 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473478 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473481 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473485 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473489 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473492 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473499 | orchestrator | 2025-06-02 20:19:40.473503 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-02 20:19:40.473510 | 
orchestrator | Monday 02 June 2025 20:17:38 +0000 (0:00:02.198) 0:02:11.967 *********** 2025-06-02 20:19:40.473513 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 20:19:40.473518 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473521 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 20:19:40.473528 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473532 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 20:19:40.473536 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 20:19:40.473540 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473543 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473547 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 20:19:40.473551 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473555 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 20:19:40.473558 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473562 | orchestrator | 2025-06-02 20:19:40.473566 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-02 20:19:40.473570 | orchestrator | Monday 02 June 2025 20:17:41 +0000 (0:00:02.861) 0:02:14.829 *********** 2025-06-02 20:19:40.473574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.473578 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.473585 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:19:40.473597 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.473610 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.473618 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:19:40.473626 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473630 | orchestrator | 2025-06-02 20:19:40.473633 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-02 20:19:40.473637 | orchestrator | Monday 02 June 2025 20:17:44 +0000 (0:00:02.562) 0:02:17.392 *********** 2025-06-02 20:19:40.473641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.473649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.473661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.473666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:19:40.473670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.473674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:19:40.473681 | orchestrator | 2025-06-02 20:19:40.473685 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 20:19:40.473689 | orchestrator | Monday 02 June 2025 20:17:48 +0000 (0:00:03.838) 0:02:21.230 *********** 2025-06-02 20:19:40.473693 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:40.473696 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:40.473700 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:40.473704 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:19:40.473707 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:19:40.473711 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:19:40.473715 | orchestrator | 2025-06-02 20:19:40.473719 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-02 20:19:40.473723 | orchestrator | Monday 02 June 2025 20:17:48 +0000 (0:00:00.513) 0:02:21.744 *********** 2025-06-02 20:19:40.473727 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:19:40.473731 | orchestrator | 2025-06-02 20:19:40.473736 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-02 20:19:40.473740 | orchestrator | Monday 02 June 2025 20:17:50 +0000 (0:00:02.260) 0:02:24.004 
*********** 2025-06-02 20:19:40.473744 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:19:40.473748 | orchestrator | 2025-06-02 20:19:40.473753 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-02 20:19:40.473757 | orchestrator | Monday 02 June 2025 20:17:53 +0000 (0:00:02.438) 0:02:26.443 *********** 2025-06-02 20:19:40.473761 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:19:40.473766 | orchestrator | 2025-06-02 20:19:40.473774 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 20:19:40.473778 | orchestrator | Monday 02 June 2025 20:18:37 +0000 (0:00:44.474) 0:03:10.917 *********** 2025-06-02 20:19:40.473783 | orchestrator | 2025-06-02 20:19:40.473787 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 20:19:40.473791 | orchestrator | Monday 02 June 2025 20:18:37 +0000 (0:00:00.201) 0:03:11.119 *********** 2025-06-02 20:19:40.473796 | orchestrator | 2025-06-02 20:19:40.473803 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 20:19:40.473807 | orchestrator | Monday 02 June 2025 20:18:38 +0000 (0:00:00.486) 0:03:11.606 *********** 2025-06-02 20:19:40.473812 | orchestrator | 2025-06-02 20:19:40.473816 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 20:19:40.473821 | orchestrator | Monday 02 June 2025 20:18:38 +0000 (0:00:00.079) 0:03:11.685 *********** 2025-06-02 20:19:40.473825 | orchestrator | 2025-06-02 20:19:40.473829 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 20:19:40.473834 | orchestrator | Monday 02 June 2025 20:18:38 +0000 (0:00:00.064) 0:03:11.749 *********** 2025-06-02 20:19:40.473838 | orchestrator | 2025-06-02 20:19:40.473843 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************ 2025-06-02 20:19:40.473847 | orchestrator | Monday 02 June 2025 20:18:38 +0000 (0:00:00.071) 0:03:11.821 *********** 2025-06-02 20:19:40.473851 | orchestrator | 2025-06-02 20:19:40.473855 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-02 20:19:40.473860 | orchestrator | Monday 02 June 2025 20:18:38 +0000 (0:00:00.066) 0:03:11.888 *********** 2025-06-02 20:19:40.473864 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:19:40.473868 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:19:40.473872 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:19:40.473877 | orchestrator | 2025-06-02 20:19:40.473881 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-06-02 20:19:40.473885 | orchestrator | Monday 02 June 2025 20:19:11 +0000 (0:00:32.625) 0:03:44.513 *********** 2025-06-02 20:19:40.473894 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:19:40.473898 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:19:40.473903 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:19:40.473907 | orchestrator | 2025-06-02 20:19:40.473911 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:19:40.473916 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 20:19:40.473921 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 20:19:40.473926 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 20:19:40.473930 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 20:19:40.473935 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 
2025-06-02 20:19:40.473939 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 20:19:40.473944 | orchestrator | 2025-06-02 20:19:40.473948 | orchestrator | 2025-06-02 20:19:40.473953 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:19:40.473957 | orchestrator | Monday 02 June 2025 20:19:38 +0000 (0:00:27.213) 0:04:11.727 *********** 2025-06-02 20:19:40.473961 | orchestrator | =============================================================================== 2025-06-02 20:19:40.473966 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.47s 2025-06-02 20:19:40.473970 | orchestrator | neutron : Restart neutron-server container ----------------------------- 32.63s 2025-06-02 20:19:40.473975 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 27.21s 2025-06-02 20:19:40.473980 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.79s 2025-06-02 20:19:40.473984 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.79s 2025-06-02 20:19:40.473988 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.23s 2025-06-02 20:19:40.473992 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.84s 2025-06-02 20:19:40.473997 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.08s 2025-06-02 20:19:40.474001 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 4.06s 2025-06-02 20:19:40.474005 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.01s 2025-06-02 20:19:40.474010 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.84s 2025-06-02 20:19:40.474058 | orchestrator | 
service-ks-register : neutron | Creating users -------------------------- 3.76s 2025-06-02 20:19:40.474065 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.50s 2025-06-02 20:19:40.474076 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.47s 2025-06-02 20:19:40.474088 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.46s 2025-06-02 20:19:40.474093 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.25s 2025-06-02 20:19:40.474104 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.22s 2025-06-02 20:19:40.474109 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.17s 2025-06-02 20:19:40.474114 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.13s 2025-06-02 20:19:40.474119 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.01s 2025-06-02 20:19:40.488653 | orchestrator | 2025-06-02 20:19:40 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:19:40.488840 | orchestrator | 2025-06-02 20:19:40 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED 2025-06-02 20:19:40.489511 | orchestrator | 2025-06-02 20:19:40 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:19:40.489542 | orchestrator | 2025-06-02 20:19:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:19:43.517188 | orchestrator | 2025-06-02 20:19:43 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:19:43.519526 | orchestrator | 2025-06-02 20:19:43 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:19:43.520930 | orchestrator | 2025-06-02 20:19:43 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED 
2025-06-02 20:19:43.524229 | orchestrator | 2025-06-02 20:19:43 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:19:43.524346 | orchestrator | 2025-06-02 20:19:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:19:46.561511 | orchestrator | 2025-06-02 20:19:46 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:19:46.563563 | orchestrator | 2025-06-02 20:19:46 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:19:46.565215 | orchestrator | 2025-06-02 20:19:46 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED 2025-06-02 20:19:46.567537 | orchestrator | 2025-06-02 20:19:46 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:19:46.567677 | orchestrator | 2025-06-02 20:19:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:19:49.615992 | orchestrator | 2025-06-02 20:19:49 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:19:49.618365 | orchestrator | 2025-06-02 20:19:49 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:19:49.620810 | orchestrator | 2025-06-02 20:19:49 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED 2025-06-02 20:19:49.622293 | orchestrator | 2025-06-02 20:19:49 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:19:49.622342 | orchestrator | 2025-06-02 20:19:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:19:52.653930 | orchestrator | 2025-06-02 20:19:52 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:19:52.654635 | orchestrator | 2025-06-02 20:19:52 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:19:52.655390 | orchestrator | 2025-06-02 20:19:52 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED 2025-06-02 20:19:52.656020 | 
orchestrator | 2025-06-02 20:19:52 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:19:52.656048 | orchestrator | 2025-06-02 20:19:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:19:55.697744 | orchestrator | 2025-06-02 20:19:55 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:19:55.699209 | orchestrator | 2025-06-02 20:19:55 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:19:55.700719 | orchestrator | 2025-06-02 20:19:55 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state STARTED 2025-06-02 20:19:55.702786 | orchestrator | 2025-06-02 20:19:55 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:19:55.702829 | orchestrator | 2025-06-02 20:19:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:19:58.747087 | orchestrator | 2025-06-02 20:19:58 | INFO  | Task f6959b8c-2df7-459c-a99f-18e8031b0fea is in state STARTED 2025-06-02 20:19:58.749521 | orchestrator | 2025-06-02 20:19:58 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:19:58.751013 | orchestrator | 2025-06-02 20:19:58 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:19:58.754967 | orchestrator | 2025-06-02 20:19:58 | INFO  | Task 1eddc67d-66b1-49cc-b740-6f977bc8fefc is in state SUCCESS 2025-06-02 20:19:58.756681 | orchestrator | 2025-06-02 20:19:58.756748 | orchestrator | 2025-06-02 20:19:58.756765 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:19:58.756776 | orchestrator | 2025-06-02 20:19:58.756784 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:19:58.756793 | orchestrator | Monday 02 June 2025 20:17:02 +0000 (0:00:00.262) 0:00:00.262 *********** 2025-06-02 20:19:58.756799 | orchestrator | ok: [testbed-node-0] 2025-06-02 
20:19:58.756805 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:19:58.756810 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:19:58.756815 | orchestrator | 2025-06-02 20:19:58.756820 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:19:58.756825 | orchestrator | Monday 02 June 2025 20:17:02 +0000 (0:00:00.213) 0:00:00.475 *********** 2025-06-02 20:19:58.756830 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-02 20:19:58.756835 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-02 20:19:58.756840 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-02 20:19:58.756845 | orchestrator | 2025-06-02 20:19:58.756849 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-02 20:19:58.756854 | orchestrator | 2025-06-02 20:19:58.756859 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 20:19:58.756863 | orchestrator | Monday 02 June 2025 20:17:02 +0000 (0:00:00.301) 0:00:00.777 *********** 2025-06-02 20:19:58.756868 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:19:58.756874 | orchestrator | 2025-06-02 20:19:58.756878 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-02 20:19:58.756883 | orchestrator | Monday 02 June 2025 20:17:03 +0000 (0:00:00.444) 0:00:01.221 *********** 2025-06-02 20:19:58.756887 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-02 20:19:58.756892 | orchestrator | 2025-06-02 20:19:58.756897 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-02 20:19:58.756901 | orchestrator | Monday 02 June 2025 20:17:06 +0000 (0:00:03.485) 0:00:04.707 *********** 2025-06-02 20:19:58.756906 | 
orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-02 20:19:58.756911 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-02 20:19:58.756916 | orchestrator | 2025-06-02 20:19:58.756920 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-02 20:19:58.756925 | orchestrator | Monday 02 June 2025 20:17:13 +0000 (0:00:06.498) 0:00:11.205 *********** 2025-06-02 20:19:58.756929 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:19:58.756934 | orchestrator | 2025-06-02 20:19:58.756939 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-02 20:19:58.756943 | orchestrator | Monday 02 June 2025 20:17:16 +0000 (0:00:03.244) 0:00:14.450 *********** 2025-06-02 20:19:58.756948 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:19:58.756953 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-02 20:19:58.756957 | orchestrator | 2025-06-02 20:19:58.756962 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-02 20:19:58.756966 | orchestrator | Monday 02 June 2025 20:17:20 +0000 (0:00:04.089) 0:00:18.539 *********** 2025-06-02 20:19:58.756991 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:19:58.756996 | orchestrator | 2025-06-02 20:19:58.757000 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-02 20:19:58.757005 | orchestrator | Monday 02 June 2025 20:17:24 +0000 (0:00:03.659) 0:00:22.198 *********** 2025-06-02 20:19:58.757009 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-02 20:19:58.757014 | orchestrator | 2025-06-02 20:19:58.757020 | orchestrator | TASK [designate : Ensuring config 
directories exist] *************************** 2025-06-02 20:19:58.757027 | orchestrator | Monday 02 June 2025 20:17:28 +0000 (0:00:04.362) 0:00:26.561 *********** 2025-06-02 20:19:58.757037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.757071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.757080 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.757091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757336 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757363 | orchestrator | 2025-06-02 20:19:58.757369 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-02 20:19:58.757376 | orchestrator | Monday 02 June 2025 20:17:31 +0000 (0:00:03.210) 0:00:29.772 *********** 2025-06-02 20:19:58.757383 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:58.757390 | orchestrator | 2025-06-02 20:19:58.757399 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-02 20:19:58.757409 | orchestrator | Monday 02 June 2025 20:17:32 +0000 (0:00:00.231) 0:00:30.003 *********** 2025-06-02 20:19:58.757417 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:58.757423 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:58.757430 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:58.757437 | orchestrator | 2025-06-02 20:19:58.757445 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 20:19:58.757452 | orchestrator | Monday 02 June 2025 20:17:32 +0000 (0:00:00.388) 0:00:30.392 *********** 2025-06-02 20:19:58.757459 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:19:58.757466 | orchestrator | 2025-06-02 20:19:58.757474 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-02 20:19:58.757481 | orchestrator | Monday 02 June 2025 20:17:33 +0000 (0:00:00.900) 0:00:31.293 
*********** 2025-06-02 20:19:58.757489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.757506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.757514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.757529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}}) 2025-06-02 20:19:58.757579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.757678 | orchestrator | 2025-06-02 20:19:58.757686 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-02 20:19:58.757694 | orchestrator | Monday 02 June 2025 20:17:40 +0000 (0:00:06.814) 0:00:38.107 *********** 2025-06-02 20:19:58.757702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.757719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:19:58.757733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.757746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.757766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.757774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.757783 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:58.757790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.757799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:19:58.758077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758115 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:58.758120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.758125 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:19:58.758139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758169 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:58.758178 | orchestrator | 2025-06-02 20:19:58.758188 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-02 20:19:58.758197 | orchestrator | Monday 02 June 2025 20:17:41 +0000 (0:00:01.419) 0:00:39.527 *********** 2025-06-02 20:19:58.758204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.758212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:19:58.758247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758278 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:58.758285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.758293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:19:58.758313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758321 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758343 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:58.758351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.758359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:19:58.758372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.758403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-06-02 20:19:58.758408 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:58.758412 | orchestrator | 2025-06-02 20:19:58.758417 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-02 20:19:58.758421 | orchestrator | Monday 02 June 2025 20:17:43 +0000 (0:00:01.957) 0:00:41.484 *********** 2025-06-02 20:19:58.758426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.758431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.758448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.758453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.758546 | orchestrator | 2025-06-02 20:19:58.758551 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-02 20:19:58.758556 | orchestrator | Monday 02 June 2025 20:17:50 +0000 (0:00:07.107) 0:00:48.592 *********** 2025-06-02 20:19:58.758561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.758566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.758574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.759082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759207 | orchestrator | 2025-06-02 20:19:58.759212 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-02 20:19:58.759217 | orchestrator | Monday 02 June 2025 20:18:06 +0000 (0:00:16.201) 0:01:04.794 *********** 2025-06-02 20:19:58.759221 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 20:19:58.759226 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 20:19:58.759276 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 20:19:58.759284 | 
orchestrator | 2025-06-02 20:19:58.759292 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-02 20:19:58.759299 | orchestrator | Monday 02 June 2025 20:18:11 +0000 (0:00:04.512) 0:01:09.306 *********** 2025-06-02 20:19:58.759304 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 20:19:58.759308 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 20:19:58.759313 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 20:19:58.759322 | orchestrator | 2025-06-02 20:19:58.759327 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-02 20:19:58.759331 | orchestrator | Monday 02 June 2025 20:18:14 +0000 (0:00:03.220) 0:01:12.526 *********** 2025-06-02 20:19:58.759336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.759342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.759354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.759360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759512 | orchestrator | 2025-06-02 20:19:58.759520 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-02 20:19:58.759527 | orchestrator | Monday 02 June 2025 20:18:17 +0000 (0:00:03.044) 0:01:15.571 *********** 2025-06-02 20:19:58.759534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.759541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.759549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.759564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759710 | orchestrator | 
2025-06-02 20:19:58.759717 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 20:19:58.759725 | orchestrator | Monday 02 June 2025 20:18:20 +0000 (0:00:02.738) 0:01:18.309 *********** 2025-06-02 20:19:58.759733 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:58.759742 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:58.759750 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:58.759758 | orchestrator | 2025-06-02 20:19:58.759764 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-02 20:19:58.759770 | orchestrator | Monday 02 June 2025 20:18:20 +0000 (0:00:00.501) 0:01:18.811 *********** 2025-06-02 20:19:58.759775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.759781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:19:58.759786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759822 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:19:58.759827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.759833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:19:58.759838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759873 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:19:58.759878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:19:58.759884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:19:58.759889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2025-06-02 20:19:58.759903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:19:58.759922 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:19:58.759927 | orchestrator | 2025-06-02 20:19:58.759933 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-02 20:19:58.759938 | orchestrator | Monday 02 June 2025 20:18:21 +0000 (0:00:00.957) 0:01:19.768 *********** 2025-06-02 20:19:58.759944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.759949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.759955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:19:58.759961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.759997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:19:58.760092 | orchestrator | 2025-06-02 20:19:58.760098 | orchestrator | TASK [designate : 
include_tasks] ***********************************************
2025-06-02 20:19:58.760105 | orchestrator | Monday 02 June 2025 20:18:26 +0000 (0:00:04.910) 0:01:24.679 ***********
2025-06-02 20:19:58.760112 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:58.760118 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:19:58.760129 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:19:58.760137 | orchestrator |
2025-06-02 20:19:58.760147 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-06-02 20:19:58.760154 | orchestrator | Monday 02 June 2025 20:18:27 +0000 (0:00:00.310) 0:01:24.989 ***********
2025-06-02 20:19:58.760163 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-06-02 20:19:58.760170 | orchestrator |
2025-06-02 20:19:58.760177 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-06-02 20:19:58.760184 | orchestrator | Monday 02 June 2025 20:18:30 +0000 (0:00:03.059) 0:01:28.049 ***********
2025-06-02 20:19:58.760190 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:19:58.760198 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-06-02 20:19:58.760205 | orchestrator |
2025-06-02 20:19:58.760212 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-06-02 20:19:58.760219 | orchestrator | Monday 02 June 2025 20:18:32 +0000 (0:00:02.518) 0:01:30.567 ***********
2025-06-02 20:19:58.760227 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:58.760251 | orchestrator |
2025-06-02 20:19:58.760258 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-02 20:19:58.760265 | orchestrator | Monday 02 June 2025 20:18:47 +0000 (0:00:14.775) 0:01:45.342 ***********
2025-06-02 20:19:58.760272 | orchestrator |
2025-06-02 20:19:58.760279 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-02 20:19:58.760287 | orchestrator | Monday 02 June 2025 20:18:47 +0000 (0:00:00.068) 0:01:45.411 ***********
2025-06-02 20:19:58.760295 | orchestrator |
2025-06-02 20:19:58.760301 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-02 20:19:58.760308 | orchestrator | Monday 02 June 2025 20:18:47 +0000 (0:00:00.068) 0:01:45.480 ***********
2025-06-02 20:19:58.760315 | orchestrator |
2025-06-02 20:19:58.760322 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-06-02 20:19:58.760328 | orchestrator | Monday 02 June 2025 20:18:47 +0000 (0:00:00.064) 0:01:45.544 ***********
2025-06-02 20:19:58.760334 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:58.760341 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:19:58.760347 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:19:58.760354 | orchestrator |
2025-06-02 20:19:58.760361 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-06-02 20:19:58.760374 | orchestrator | Monday 02 June 2025 20:19:02 +0000 (0:00:14.809) 0:02:00.354 ***********
2025-06-02 20:19:58.760381 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:58.760387 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:19:58.760393 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:19:58.760400 | orchestrator |
2025-06-02 20:19:58.760406 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-06-02 20:19:58.760412 | orchestrator | Monday 02 June 2025 20:19:10 +0000 (0:00:08.108) 0:02:08.462 ***********
2025-06-02 20:19:58.760419 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:58.760426 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:19:58.760432 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:19:58.760439 | orchestrator |
2025-06-02 20:19:58.760446 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-06-02 20:19:58.760453 | orchestrator | Monday 02 June 2025 20:19:24 +0000 (0:00:14.110) 0:02:22.573 ***********
2025-06-02 20:19:58.760460 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:58.760467 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:19:58.760475 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:19:58.760481 | orchestrator |
2025-06-02 20:19:58.760490 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-06-02 20:19:58.760496 | orchestrator | Monday 02 June 2025 20:19:30 +0000 (0:00:06.269) 0:02:28.842 ***********
2025-06-02 20:19:58.760505 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:58.760512 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:19:58.760519 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:19:58.760527 | orchestrator |
2025-06-02 20:19:58.760535 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-06-02 20:19:58.760542 | orchestrator | Monday 02 June 2025 20:19:36 +0000 (0:00:05.319) 0:02:34.162 ***********
2025-06-02 20:19:58.760547 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:58.760552 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:19:58.760557 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:19:58.760561 | orchestrator |
2025-06-02 20:19:58.760566 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-06-02 20:19:58.760571 | orchestrator | Monday 02 June 2025 20:19:48 +0000 (0:00:12.459) 0:02:46.621 ***********
2025-06-02 20:19:58.760576 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:58.760580 | orchestrator |
2025-06-02 20:19:58.760585 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:19:58.760590 |
orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 20:19:58.760596 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 20:19:58.760601 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 20:19:58.760605 | orchestrator |
2025-06-02 20:19:58.760610 | orchestrator |
2025-06-02 20:19:58.760631 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:19:58.760642 | orchestrator | Monday 02 June 2025 20:19:56 +0000 (0:00:07.675) 0:02:54.296 ***********
2025-06-02 20:19:58.760649 | orchestrator | ===============================================================================
2025-06-02 20:19:58.760656 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.20s
2025-06-02 20:19:58.760663 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.81s
2025-06-02 20:19:58.760670 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.78s
2025-06-02 20:19:58.760677 | orchestrator | designate : Restart designate-central container ------------------------ 14.11s
2025-06-02 20:19:58.760685 | orchestrator | designate : Restart designate-worker container ------------------------- 12.46s
2025-06-02 20:19:58.760692 | orchestrator | designate : Restart designate-api container ----------------------------- 8.11s
2025-06-02 20:19:58.760705 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.68s
2025-06-02 20:19:58.760712 | orchestrator | designate : Copying over config.json files for services ----------------- 7.11s
2025-06-02 20:19:58.760719 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.81s
2025-06-02 20:19:58.760726 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.50s
2025-06-02 20:19:58.760733 | orchestrator | designate : Restart designate-producer container ------------------------ 6.27s
2025-06-02 20:19:58.760740 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.32s
2025-06-02 20:19:58.760748 | orchestrator | designate : Check designate containers ---------------------------------- 4.91s
2025-06-02 20:19:58.760755 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.51s
2025-06-02 20:19:58.760763 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.36s
2025-06-02 20:19:58.760771 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.09s
2025-06-02 20:19:58.760779 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.66s
2025-06-02 20:19:58.760784 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.49s
2025-06-02 20:19:58.760789 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.24s
2025-06-02 20:19:58.760794 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.22s
2025-06-02 20:19:58.760798 | orchestrator | 2025-06-02 20:19:58 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:19:58.760803 | orchestrator | 2025-06-02 20:19:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:01.802415 | orchestrator | 2025-06-02 20:20:01 | INFO  | Task f6959b8c-2df7-459c-a99f-18e8031b0fea is in state STARTED
2025-06-02 20:20:01.802500 | orchestrator | 2025-06-02 20:20:01 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:01.806265 | orchestrator | 2025-06-02 20:20:01 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:01.806925 | orchestrator | 2025-06-02 20:20:01 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:01.806963 | orchestrator | 2025-06-02 20:20:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:04.833291 | orchestrator | 2025-06-02 20:20:04 | INFO  | Task f6959b8c-2df7-459c-a99f-18e8031b0fea is in state SUCCESS
2025-06-02 20:20:04.833397 | orchestrator | 2025-06-02 20:20:04 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:04.833880 | orchestrator | 2025-06-02 20:20:04 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:04.834674 | orchestrator | 2025-06-02 20:20:04 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:04.835343 | orchestrator | 2025-06-02 20:20:04 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:04.835364 | orchestrator | 2025-06-02 20:20:04 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:07.867873 | orchestrator | 2025-06-02 20:20:07 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:07.867973 | orchestrator | 2025-06-02 20:20:07 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:07.868508 | orchestrator | 2025-06-02 20:20:07 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:07.869823 | orchestrator | 2025-06-02 20:20:07 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:07.869864 | orchestrator | 2025-06-02 20:20:07 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:10.903949 | orchestrator | 2025-06-02 20:20:10 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:10.905424 | orchestrator | 2025-06-02 20:20:10 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:10.906986 | orchestrator | 2025-06-02 20:20:10 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:10.908356 | orchestrator | 2025-06-02 20:20:10 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:10.908392 | orchestrator | 2025-06-02 20:20:10 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:13.947772 | orchestrator | 2025-06-02 20:20:13 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:13.949567 | orchestrator | 2025-06-02 20:20:13 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:13.951981 | orchestrator | 2025-06-02 20:20:13 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:13.953465 | orchestrator | 2025-06-02 20:20:13 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:13.953702 | orchestrator | 2025-06-02 20:20:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:17.002262 | orchestrator | 2025-06-02 20:20:16 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:17.002949 | orchestrator | 2025-06-02 20:20:17 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:17.004577 | orchestrator | 2025-06-02 20:20:17 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:17.006670 | orchestrator | 2025-06-02 20:20:17 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:17.006764 | orchestrator | 2025-06-02 20:20:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:20.039525 | orchestrator | 2025-06-02 20:20:20 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:20.042115 | orchestrator | 2025-06-02 20:20:20 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:20.046854 | orchestrator | 2025-06-02 20:20:20 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:20.049092 | orchestrator | 2025-06-02 20:20:20 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:20.049659 | orchestrator | 2025-06-02 20:20:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:23.102835 | orchestrator | 2025-06-02 20:20:23 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:23.102920 | orchestrator | 2025-06-02 20:20:23 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:23.102932 | orchestrator | 2025-06-02 20:20:23 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:23.103453 | orchestrator | 2025-06-02 20:20:23 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:23.103473 | orchestrator | 2025-06-02 20:20:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:26.145001 | orchestrator | 2025-06-02 20:20:26 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:26.146771 | orchestrator | 2025-06-02 20:20:26 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:26.148185 | orchestrator | 2025-06-02 20:20:26 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:26.149374 | orchestrator | 2025-06-02 20:20:26 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:26.149422 | orchestrator | 2025-06-02 20:20:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:29.193787 | orchestrator | 2025-06-02 20:20:29 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:29.195795 | orchestrator | 2025-06-02 20:20:29 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:29.198343 | orchestrator | 2025-06-02 20:20:29 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:29.200027 | orchestrator | 2025-06-02 20:20:29 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:29.200072 | orchestrator | 2025-06-02 20:20:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:32.244564 | orchestrator | 2025-06-02 20:20:32 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:32.244703 | orchestrator | 2025-06-02 20:20:32 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:32.245451 | orchestrator | 2025-06-02 20:20:32 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:32.246506 | orchestrator | 2025-06-02 20:20:32 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:32.246558 | orchestrator | 2025-06-02 20:20:32 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:35.274874 | orchestrator | 2025-06-02 20:20:35 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:35.274951 | orchestrator | 2025-06-02 20:20:35 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:35.276006 | orchestrator | 2025-06-02 20:20:35 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:35.277150 | orchestrator | 2025-06-02 20:20:35 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:35.277167 | orchestrator | 2025-06-02 20:20:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:38.323141 | orchestrator | 2025-06-02 20:20:38 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:38.325389 | orchestrator | 2025-06-02 20:20:38 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:38.326763 | orchestrator | 2025-06-02 20:20:38 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:38.328509 | orchestrator | 2025-06-02 20:20:38 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:38.329037 | orchestrator | 2025-06-02 20:20:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:41.374773 | orchestrator | 2025-06-02 20:20:41 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:41.375518 | orchestrator | 2025-06-02 20:20:41 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:41.376504 | orchestrator | 2025-06-02 20:20:41 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:41.378151 | orchestrator | 2025-06-02 20:20:41 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:41.378230 | orchestrator | 2025-06-02 20:20:41 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:44.415625 | orchestrator | 2025-06-02 20:20:44 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:44.415807 | orchestrator | 2025-06-02 20:20:44 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:44.418574 | orchestrator | 2025-06-02 20:20:44 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:44.418692 | orchestrator | 2025-06-02 20:20:44 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:44.418704 | orchestrator | 2025-06-02 20:20:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:47.470753 | orchestrator | 2025-06-02 20:20:47 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:47.472558 | orchestrator | 2025-06-02 20:20:47 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:47.473956 | orchestrator | 2025-06-02 20:20:47 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:47.478960 | orchestrator | 2025-06-02 20:20:47 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:47.479013 | orchestrator | 2025-06-02 20:20:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:50.525522 | orchestrator | 2025-06-02 20:20:50 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:50.529278 | orchestrator | 2025-06-02 20:20:50 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:50.530463 | orchestrator | 2025-06-02 20:20:50 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:50.532716 | orchestrator | 2025-06-02 20:20:50 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:50.532778 | orchestrator | 2025-06-02 20:20:50 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:53.579621 | orchestrator | 2025-06-02 20:20:53 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:53.581526 | orchestrator | 2025-06-02 20:20:53 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:53.584055 | orchestrator | 2025-06-02 20:20:53 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:53.587009 | orchestrator | 2025-06-02 20:20:53 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:53.587085 | orchestrator | 2025-06-02 20:20:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:56.632612 | orchestrator | 2025-06-02 20:20:56 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:56.634645 | orchestrator | 2025-06-02 20:20:56 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:56.636544 | orchestrator | 2025-06-02 20:20:56 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:56.638446 | orchestrator | 2025-06-02 20:20:56 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:56.638492 | orchestrator | 2025-06-02 20:20:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:59.687330 | orchestrator | 2025-06-02 20:20:59 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:20:59.689093 | orchestrator | 2025-06-02 20:20:59 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:20:59.691969 | orchestrator | 2025-06-02 20:20:59 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:20:59.695121 | orchestrator | 2025-06-02 20:20:59 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:20:59.695238 | orchestrator | 2025-06-02 20:20:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:02.742454 | orchestrator | 2025-06-02 20:21:02 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:21:02.744508 | orchestrator | 2025-06-02 20:21:02 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:21:02.745563 | orchestrator | 2025-06-02 20:21:02 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:21:02.747720 | orchestrator | 2025-06-02 20:21:02 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED
2025-06-02 20:21:02.747774 | orchestrator | 2025-06-02 20:21:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:05.795075 | orchestrator | 2025-06-02 20:21:05 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:21:05.798460 | orchestrator | 2025-06-02 20:21:05 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:21:05.803104 | orchestrator | 2025-06-02 20:21:05 | INFO  | Task
2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:05.805497 | orchestrator | 2025-06-02 20:21:05 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:05.805556 | orchestrator | 2025-06-02 20:21:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:08.849915 | orchestrator | 2025-06-02 20:21:08 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:08.854619 | orchestrator | 2025-06-02 20:21:08 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:08.854720 | orchestrator | 2025-06-02 20:21:08 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:08.854738 | orchestrator | 2025-06-02 20:21:08 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:08.854752 | orchestrator | 2025-06-02 20:21:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:11.893842 | orchestrator | 2025-06-02 20:21:11 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:11.894644 | orchestrator | 2025-06-02 20:21:11 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:11.895341 | orchestrator | 2025-06-02 20:21:11 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:11.896544 | orchestrator | 2025-06-02 20:21:11 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:11.896571 | orchestrator | 2025-06-02 20:21:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:14.930486 | orchestrator | 2025-06-02 20:21:14 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:14.930981 | orchestrator | 2025-06-02 20:21:14 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:14.931608 | orchestrator | 2025-06-02 20:21:14 | INFO  | Task 
2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:14.932749 | orchestrator | 2025-06-02 20:21:14 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:14.932802 | orchestrator | 2025-06-02 20:21:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:17.983807 | orchestrator | 2025-06-02 20:21:17 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:17.985557 | orchestrator | 2025-06-02 20:21:17 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:17.987080 | orchestrator | 2025-06-02 20:21:17 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:17.988826 | orchestrator | 2025-06-02 20:21:17 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:17.989103 | orchestrator | 2025-06-02 20:21:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:21.040028 | orchestrator | 2025-06-02 20:21:21 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:21.042219 | orchestrator | 2025-06-02 20:21:21 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:21.043812 | orchestrator | 2025-06-02 20:21:21 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:21.045422 | orchestrator | 2025-06-02 20:21:21 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:21.045463 | orchestrator | 2025-06-02 20:21:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:24.089279 | orchestrator | 2025-06-02 20:21:24 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:24.089636 | orchestrator | 2025-06-02 20:21:24 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:24.090562 | orchestrator | 2025-06-02 20:21:24 | INFO  | Task 
2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:24.092214 | orchestrator | 2025-06-02 20:21:24 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:24.092260 | orchestrator | 2025-06-02 20:21:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:27.139241 | orchestrator | 2025-06-02 20:21:27 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:27.141221 | orchestrator | 2025-06-02 20:21:27 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:27.141798 | orchestrator | 2025-06-02 20:21:27 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:27.143024 | orchestrator | 2025-06-02 20:21:27 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:27.143056 | orchestrator | 2025-06-02 20:21:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:30.188577 | orchestrator | 2025-06-02 20:21:30 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:30.189808 | orchestrator | 2025-06-02 20:21:30 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:30.191284 | orchestrator | 2025-06-02 20:21:30 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:30.194527 | orchestrator | 2025-06-02 20:21:30 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:30.194608 | orchestrator | 2025-06-02 20:21:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:33.238275 | orchestrator | 2025-06-02 20:21:33 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:33.240282 | orchestrator | 2025-06-02 20:21:33 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:33.241191 | orchestrator | 2025-06-02 20:21:33 | INFO  | Task 
2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:33.243372 | orchestrator | 2025-06-02 20:21:33 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:33.243411 | orchestrator | 2025-06-02 20:21:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:36.273119 | orchestrator | 2025-06-02 20:21:36 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:36.275825 | orchestrator | 2025-06-02 20:21:36 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:36.276353 | orchestrator | 2025-06-02 20:21:36 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:36.276851 | orchestrator | 2025-06-02 20:21:36 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:36.277078 | orchestrator | 2025-06-02 20:21:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:39.309561 | orchestrator | 2025-06-02 20:21:39 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:39.309990 | orchestrator | 2025-06-02 20:21:39 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:39.311000 | orchestrator | 2025-06-02 20:21:39 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:39.311548 | orchestrator | 2025-06-02 20:21:39 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:39.311679 | orchestrator | 2025-06-02 20:21:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:42.342789 | orchestrator | 2025-06-02 20:21:42 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:42.343105 | orchestrator | 2025-06-02 20:21:42 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:42.343778 | orchestrator | 2025-06-02 20:21:42 | INFO  | Task 
2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:42.345757 | orchestrator | 2025-06-02 20:21:42 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state STARTED 2025-06-02 20:21:42.345812 | orchestrator | 2025-06-02 20:21:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:21:45.392985 | orchestrator | 2025-06-02 20:21:45 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED 2025-06-02 20:21:45.394432 | orchestrator | 2025-06-02 20:21:45 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED 2025-06-02 20:21:45.397283 | orchestrator | 2025-06-02 20:21:45 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:21:45.400527 | orchestrator | 2025-06-02 20:21:45 | INFO  | Task 097885e0-864f-4e56-8d16-a20bd23cb076 is in state SUCCESS 2025-06-02 20:21:45.402365 | orchestrator | 2025-06-02 20:21:45.402425 | orchestrator | 2025-06-02 20:21:45.402434 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:21:45.402443 | orchestrator | 2025-06-02 20:21:45.402449 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:21:45.402456 | orchestrator | Monday 02 June 2025 20:20:00 +0000 (0:00:00.184) 0:00:00.184 *********** 2025-06-02 20:21:45.402463 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:21:45.402471 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:21:45.402477 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:21:45.402483 | orchestrator | 2025-06-02 20:21:45.402491 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:21:45.402526 | orchestrator | Monday 02 June 2025 20:20:01 +0000 (0:00:00.382) 0:00:00.566 *********** 2025-06-02 20:21:45.402534 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-02 20:21:45.402541 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-06-02 20:21:45.402548 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-02 20:21:45.402554 | orchestrator | 2025-06-02 20:21:45.402561 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-02 20:21:45.402566 | orchestrator | 2025-06-02 20:21:45.402573 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-02 20:21:45.402579 | orchestrator | Monday 02 June 2025 20:20:01 +0000 (0:00:00.680) 0:00:01.246 *********** 2025-06-02 20:21:45.402586 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:21:45.402664 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:21:45.402753 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:21:45.402764 | orchestrator | 2025-06-02 20:21:45.402771 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:21:45.402779 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:21:45.402789 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:21:45.402796 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:21:45.402802 | orchestrator | 2025-06-02 20:21:45.402808 | orchestrator | 2025-06-02 20:21:45.402814 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:21:45.402821 | orchestrator | Monday 02 June 2025 20:20:02 +0000 (0:00:00.851) 0:00:02.098 *********** 2025-06-02 20:21:45.402828 | orchestrator | =============================================================================== 2025-06-02 20:21:45.402835 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.85s 2025-06-02 20:21:45.402841 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.68s 2025-06-02 20:21:45.402848 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2025-06-02 20:21:45.402872 | orchestrator | 2025-06-02 20:21:45.402879 | orchestrator | 2025-06-02 20:21:45.402885 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:21:45.402891 | orchestrator | 2025-06-02 20:21:45.402898 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:21:45.402904 | orchestrator | Monday 02 June 2025 20:19:42 +0000 (0:00:00.234) 0:00:00.234 *********** 2025-06-02 20:21:45.402911 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:21:45.402917 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:21:45.402923 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:21:45.402929 | orchestrator | 2025-06-02 20:21:45.402948 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:21:45.402954 | orchestrator | Monday 02 June 2025 20:19:43 +0000 (0:00:00.272) 0:00:00.506 *********** 2025-06-02 20:21:45.402960 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-02 20:21:45.402966 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-02 20:21:45.402972 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-02 20:21:45.402978 | orchestrator | 2025-06-02 20:21:45.402984 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-02 20:21:45.402990 | orchestrator | 2025-06-02 20:21:45.402996 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 20:21:45.403003 | orchestrator | Monday 02 June 2025 20:19:43 +0000 (0:00:00.354) 0:00:00.860 *********** 2025-06-02 20:21:45.403009 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-06-02 20:21:45.403015 | orchestrator | 2025-06-02 20:21:45.403021 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-02 20:21:45.403027 | orchestrator | Monday 02 June 2025 20:19:44 +0000 (0:00:00.498) 0:00:01.359 *********** 2025-06-02 20:21:45.403035 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-02 20:21:45.403041 | orchestrator | 2025-06-02 20:21:45.403047 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-02 20:21:45.403054 | orchestrator | Monday 02 June 2025 20:19:47 +0000 (0:00:03.868) 0:00:05.227 *********** 2025-06-02 20:21:45.403060 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-02 20:21:45.403066 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-02 20:21:45.403072 | orchestrator | 2025-06-02 20:21:45.403099 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-02 20:21:45.403114 | orchestrator | Monday 02 June 2025 20:19:55 +0000 (0:00:07.130) 0:00:12.358 *********** 2025-06-02 20:21:45.403144 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:21:45.403150 | orchestrator | 2025-06-02 20:21:45.403156 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-02 20:21:45.403162 | orchestrator | Monday 02 June 2025 20:19:58 +0000 (0:00:03.385) 0:00:15.744 *********** 2025-06-02 20:21:45.403183 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:21:45.403190 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-02 20:21:45.403196 | orchestrator | 2025-06-02 20:21:45.403202 | orchestrator | TASK [service-ks-register : magnum | Creating roles] 
*************************** 2025-06-02 20:21:45.403208 | orchestrator | Monday 02 June 2025 20:20:02 +0000 (0:00:04.055) 0:00:19.799 *********** 2025-06-02 20:21:45.403214 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:21:45.403221 | orchestrator | 2025-06-02 20:21:45.403227 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-02 20:21:45.403233 | orchestrator | Monday 02 June 2025 20:20:05 +0000 (0:00:03.461) 0:00:23.261 *********** 2025-06-02 20:21:45.403239 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-02 20:21:45.403245 | orchestrator | 2025-06-02 20:21:45.403252 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-02 20:21:45.403258 | orchestrator | Monday 02 June 2025 20:20:10 +0000 (0:00:04.113) 0:00:27.374 *********** 2025-06-02 20:21:45.403264 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:21:45.403270 | orchestrator | 2025-06-02 20:21:45.403276 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-02 20:21:45.403282 | orchestrator | Monday 02 June 2025 20:20:13 +0000 (0:00:03.250) 0:00:30.624 *********** 2025-06-02 20:21:45.403289 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:21:45.403295 | orchestrator | 2025-06-02 20:21:45.403301 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-02 20:21:45.403307 | orchestrator | Monday 02 June 2025 20:20:17 +0000 (0:00:04.002) 0:00:34.626 *********** 2025-06-02 20:21:45.403313 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:21:45.403320 | orchestrator | 2025-06-02 20:21:45.403327 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-02 20:21:45.403333 | orchestrator | Monday 02 June 2025 20:20:21 +0000 (0:00:03.749) 0:00:38.376 *********** 2025-06-02 20:21:45.403343 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.403357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.403370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.403383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.403392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.403399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.403405 | orchestrator | 2025-06-02 20:21:45.403411 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-02 20:21:45.403417 | orchestrator | Monday 02 June 2025 20:20:22 +0000 (0:00:01.402) 0:00:39.778 *********** 2025-06-02 20:21:45.403424 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:21:45.403430 | orchestrator | 2025-06-02 20:21:45.403436 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-02 20:21:45.403447 | orchestrator | Monday 02 June 2025 20:20:22 +0000 (0:00:00.132) 0:00:39.911 *********** 2025-06-02 20:21:45.403459 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:21:45.403466 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:21:45.403472 | orchestrator | skipping: [testbed-node-2] 
2025-06-02 20:21:45.403479 | orchestrator | 2025-06-02 20:21:45.403485 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-02 20:21:45.403492 | orchestrator | Monday 02 June 2025 20:20:23 +0000 (0:00:00.492) 0:00:40.403 *********** 2025-06-02 20:21:45.403498 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 20:21:45.403505 | orchestrator | 2025-06-02 20:21:45.403511 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-02 20:21:45.403518 | orchestrator | Monday 02 June 2025 20:20:23 +0000 (0:00:00.825) 0:00:41.229 *********** 2025-06-02 20:21:45.403525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.403537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.403544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.403551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.403565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.403573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.403579 | orchestrator | 2025-06-02 20:21:45.403585 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-02 20:21:45.403591 | orchestrator | Monday 02 June 
2025 20:20:26 +0000 (0:00:02.386) 0:00:43.615 *********** 2025-06-02 20:21:45.403598 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:21:45.403604 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:21:45.403610 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:21:45.403617 | orchestrator | 2025-06-02 20:21:45.403623 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 20:21:45.403633 | orchestrator | Monday 02 June 2025 20:20:26 +0000 (0:00:00.312) 0:00:43.928 *********** 2025-06-02 20:21:45.403640 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:21:45.403647 | orchestrator | 2025-06-02 20:21:45.403653 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-02 20:21:45.403660 | orchestrator | Monday 02 June 2025 20:20:27 +0000 (0:00:00.724) 0:00:44.653 *********** 2025-06-02 20:21:45.403667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.403674 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.403689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.403696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.403708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.403714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.403721 | orchestrator | 2025-06-02 20:21:45.403729 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-02 20:21:45.403736 | orchestrator | Monday 02 June 2025 20:20:29 +0000 (0:00:02.414) 0:00:47.067 *********** 2025-06-02 20:21:45.403748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:21:45.403758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:21:45.403766 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:21:45.403773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:21:45.403785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:21:45.403792 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:21:45.403798 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:21:45.403810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:21:45.403817 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:21:45.403823 | orchestrator | 2025-06-02 20:21:45.403830 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-02 20:21:45.403844 | orchestrator | Monday 02 June 2025 20:20:30 +0000 (0:00:00.670) 0:00:47.737 
*********** 2025-06-02 20:21:45.403851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:21:45.403858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:21:45.403865 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:21:45.403875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:21:45.403887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:21:45.403894 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:21:45.403903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:21:45.403910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:21:45.403917 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:21:45.403923 | orchestrator | 2025-06-02 20:21:45.403930 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-02 20:21:45.403937 | orchestrator | Monday 02 June 2025 20:20:31 +0000 (0:00:01.347) 0:00:49.084 *********** 2025-06-02 20:21:45.404136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.404149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.404161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.404172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.404180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.404192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.404200 | orchestrator | 2025-06-02 20:21:45.404207 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-02 20:21:45.404214 | orchestrator | Monday 02 June 2025 20:20:34 +0000 (0:00:02.461) 0:00:51.545 *********** 2025-06-02 20:21:45.404226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-06-02 20:21:45.404233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.404244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:21:45.404250 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.404263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.404275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:21:45.404282 | orchestrator | 2025-06-02 20:21:45.404289 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-02 20:21:45.404295 | orchestrator | Monday 02 June 2025 20:20:39 +0000 (0:00:05.297) 0:00:56.843 *********** 2025-06-02 20:21:45.404301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:21:45.404312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:21:45.404319 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:21:45.404326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:21:45.404337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:21:45.404348 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:21:45.404355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:21:45.404362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:21:45.404369 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:21:45.404376 | orchestrator |
2025-06-02 20:21:45.404382 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2025-06-02 20:21:45.404392 | orchestrator | Monday 02 June 2025 20:20:40 +0000 (0:00:00.795) 0:00:57.638 ***********
2025-06-02 20:21:45.404399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:21:45.404410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:21:45.404422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:21:45.404428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:21:45.404439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:21:45.404445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:21:45.404452 | orchestrator |
2025-06-02 20:21:45.404459 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-02 20:21:45.404466 | orchestrator | Monday 02 June 2025 20:20:42 +0000 (0:00:02.046) 0:00:59.685 ***********
2025-06-02 20:21:45.404473 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:21:45.404479 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:21:45.404486 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:21:45.404498 | orchestrator |
2025-06-02 20:21:45.404504 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-06-02 20:21:45.404512 | orchestrator | Monday 02 June 2025
20:20:42 +0000 (0:00:00.303) 0:00:59.989 ***********
2025-06-02 20:21:45.404519 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:21:45.404526 | orchestrator |
2025-06-02 20:21:45.404533 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-06-02 20:21:45.404540 | orchestrator | Monday 02 June 2025 20:20:45 +0000 (0:00:02.289) 0:01:02.278 ***********
2025-06-02 20:21:45.404548 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:21:45.404555 | orchestrator |
2025-06-02 20:21:45.404562 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-06-02 20:21:45.404569 | orchestrator | Monday 02 June 2025 20:20:47 +0000 (0:00:02.475) 0:01:04.754 ***********
2025-06-02 20:21:45.404579 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:21:45.404586 | orchestrator |
2025-06-02 20:21:45.404593 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-02 20:21:45.404599 | orchestrator | Monday 02 June 2025 20:21:07 +0000 (0:00:20.235) 0:01:24.989 ***********
2025-06-02 20:21:45.404606 | orchestrator |
2025-06-02 20:21:45.404613 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-02 20:21:45.404620 | orchestrator | Monday 02 June 2025 20:21:07 +0000 (0:00:00.063) 0:01:25.053 ***********
2025-06-02 20:21:45.404627 | orchestrator |
2025-06-02 20:21:45.404633 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-02 20:21:45.404640 | orchestrator | Monday 02 June 2025 20:21:07 +0000 (0:00:00.077) 0:01:25.130 ***********
2025-06-02 20:21:45.404647 | orchestrator |
2025-06-02 20:21:45.404654 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-06-02 20:21:45.404660 | orchestrator | Monday 02 June 2025 20:21:07 +0000 (0:00:00.083) 0:01:25.213 ***********
2025-06-02 20:21:45.404666 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:21:45.404674 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:21:45.404681 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:21:45.404688 | orchestrator |
2025-06-02 20:21:45.404694 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-06-02 20:21:45.404699 | orchestrator | Monday 02 June 2025 20:21:29 +0000 (0:00:21.726) 0:01:46.940 ***********
2025-06-02 20:21:45.404705 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:21:45.404711 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:21:45.404717 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:21:45.404724 | orchestrator |
2025-06-02 20:21:45.404732 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:21:45.404743 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 20:21:45.404751 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 20:21:45.404759 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 20:21:45.404768 | orchestrator |
2025-06-02 20:21:45.404775 | orchestrator |
2025-06-02 20:21:45.404784 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:21:45.404792 | orchestrator | Monday 02 June 2025 20:21:42 +0000 (0:00:12.443) 0:01:59.384 ***********
2025-06-02 20:21:45.404800 | orchestrator | ===============================================================================
2025-06-02 20:21:45.404809 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.73s
2025-06-02 20:21:45.404818 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 20.24s
2025-06-02 20:21:45.404826 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.44s
2025-06-02 20:21:45.404834 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.13s
2025-06-02 20:21:45.404848 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.30s
2025-06-02 20:21:45.404857 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.11s
2025-06-02 20:21:45.404872 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.06s
2025-06-02 20:21:45.404880 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.00s
2025-06-02 20:21:45.404889 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.87s
2025-06-02 20:21:45.404898 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.75s
2025-06-02 20:21:45.404906 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.46s
2025-06-02 20:21:45.404915 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.39s
2025-06-02 20:21:45.404922 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.25s
2025-06-02 20:21:45.404930 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.48s
2025-06-02 20:21:45.404939 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.46s
2025-06-02 20:21:45.404946 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.41s
2025-06-02 20:21:45.404954 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.39s
2025-06-02 20:21:45.404961 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.29s
2025-06-02 20:21:45.404970 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.05s
2025-06-02 20:21:45.404979 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.40s
2025-06-02 20:21:45.404986 | orchestrator | 2025-06-02 20:21:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:48.449232 | orchestrator | 2025-06-02 20:21:48 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:21:48.451899 | orchestrator | 2025-06-02 20:21:48 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:21:48.453542 | orchestrator | 2025-06-02 20:21:48 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:21:48.453628 | orchestrator | 2025-06-02 20:21:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:51.494268 | orchestrator | 2025-06-02 20:21:51 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:21:51.494910 | orchestrator | 2025-06-02 20:21:51 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:21:51.495906 | orchestrator | 2025-06-02 20:21:51 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:21:51.495920 | orchestrator | 2025-06-02 20:21:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:54.539903 | orchestrator | 2025-06-02 20:21:54 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:21:54.542246 | orchestrator | 2025-06-02 20:21:54 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state STARTED
2025-06-02 20:21:54.544032 | orchestrator | 2025-06-02 20:21:54 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:21:54.544507 | orchestrator | 2025-06-02 20:21:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:57.586565 | orchestrator | 2025-06-02 20:21:57 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:21:57.587957 | orchestrator | 2025-06-02 20:21:57 | INFO  | Task 9e6cccf0-4095-4684-8c63-6448ed2fe39e is in state SUCCESS
2025-06-02 20:21:57.589941 | orchestrator |
2025-06-02 20:21:57.590002 | orchestrator |
2025-06-02 20:21:57.590011 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:21:57.590091 | orchestrator |
2025-06-02 20:21:57.590098 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:21:57.590152 | orchestrator | Monday 02 June 2025 20:19:44 +0000 (0:00:00.246) 0:00:00.246 ***********
2025-06-02 20:21:57.590158 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:21:57.590165 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:21:57.590171 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:21:57.590177 | orchestrator |
2025-06-02 20:21:57.590183 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:21:57.590189 | orchestrator | Monday 02 June 2025 20:19:44 +0000 (0:00:00.307) 0:00:00.553 ***********
2025-06-02 20:21:57.590195 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-06-02 20:21:57.590202 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-06-02 20:21:57.590207 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-06-02 20:21:57.590213 | orchestrator |
2025-06-02 20:21:57.590219 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-06-02 20:21:57.590225 | orchestrator |
2025-06-02 20:21:57.590231 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-02 20:21:57.590237 | orchestrator | Monday 02 June 2025 20:19:44 +0000 (0:00:00.408) 0:00:00.962 ***********
2025-06-02 20:21:57.590243 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1,
testbed-node-2
2025-06-02 20:21:57.590250 | orchestrator |
2025-06-02 20:21:57.590256 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-06-02 20:21:57.590261 | orchestrator | Monday 02 June 2025 20:19:45 +0000 (0:00:00.469) 0:00:01.432 ***********
2025-06-02 20:21:57.590280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590303 | orchestrator |
2025-06-02 20:21:57.590309 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-06-02 20:21:57.590315 | orchestrator | Monday 02 June 2025 20:19:46 +0000 (0:00:00.822) 0:00:02.254 ***********
2025-06-02 20:21:57.590326 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-06-02 20:21:57.590333 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-06-02 20:21:57.590406 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:21:57.590413 | orchestrator |
2025-06-02 20:21:57.590419 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-02 20:21:57.590425 | orchestrator | Monday 02 June 2025 20:19:46 +0000 (0:00:00.729) 0:00:02.983 ***********
2025-06-02 20:21:57.590431 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:21:57.590437 | orchestrator |
2025-06-02 20:21:57.590443 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-06-02 20:21:57.590448 | orchestrator | Monday 02 June 2025 20:19:47 +0000 (0:00:00.639) 0:00:03.623 ***********
2025-06-02 20:21:57.590467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590492 | orchestrator |
2025-06-02 20:21:57.590498 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-06-02 20:21:57.590504 | orchestrator | Monday 02 June 2025 20:19:49 +0000 (0:00:01.483) 0:00:05.106 ***********
2025-06-02 20:21:57.590511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590533 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:21:57.590543 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:21:57.590560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590570 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:21:57.590582 | orchestrator |
2025-06-02 20:21:57.590596 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-06-02 20:21:57.590607 | orchestrator | Monday 02 June 2025 20:19:49 +0000 (0:00:00.339) 0:00:05.446 ***********
2025-06-02 20:21:57.590616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode':
'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590642 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:21:57.590651 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:21:57.590683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590695 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:21:57.590704 | orchestrator |
2025-06-02 20:21:57.590714 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-06-02 20:21:57.590756 | orchestrator | Monday 02 June 2025 20:19:50 +0000 (0:00:00.802) 0:00:06.249 ***********
2025-06-02 20:21:57.590767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590797 | orchestrator |
2025-06-02 20:21:57.590804 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-06-02 20:21:57.590810 | orchestrator | Monday 02 June 2025 20:19:51 +0000 (0:00:01.297) 0:00:07.546 ***********
2025-06-02 20:21:57.590816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.590844 | orchestrator |
2025-06-02 20:21:57.590850 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-06-02 20:21:57.590856 | orchestrator | Monday 02 June 2025 20:19:52 +0000 (0:00:01.297) 0:00:08.844 ***********
2025-06-02 20:21:57.590862 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:21:57.590868 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:21:57.590873 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:21:57.590879 | orchestrator |
2025-06-02 20:21:57.590885 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-06-02 20:21:57.590891 | orchestrator | Monday 02 June 2025 20:19:53 +0000 (0:00:00.533) 0:00:09.377 ***********
2025-06-02 20:21:57.590897 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 20:21:57.590903 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 20:21:57.590909 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 20:21:57.590914 | orchestrator |
2025-06-02 20:21:57.590920 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-06-02 20:21:57.590926 | orchestrator | Monday 02 June 2025 20:19:54 +0000 (0:00:01.222) 0:00:10.599 ***********
2025-06-02 20:21:57.590932 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 20:21:57.590938 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 20:21:57.590944 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 20:21:57.590950 | orchestrator |
2025-06-02 20:21:57.590956 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-06-02 20:21:57.590975 | orchestrator | Monday 02 June 2025 20:19:55 +0000 (0:00:01.356) 0:00:11.956 ***********
2025-06-02 20:21:57.590985 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:21:57.590991 | orchestrator |
2025-06-02 20:21:57.590997 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-06-02 20:21:57.591003 | orchestrator | Monday 02 June 2025 20:19:56 +0000 (0:00:00.743) 0:00:12.700 ***********
2025-06-02 20:21:57.591009 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-06-02 20:21:57.591014 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-06-02 20:21:57.591020 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:21:57.591026 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:21:57.591032 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:21:57.591038 | orchestrator |
2025-06-02 20:21:57.591044 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-06-02 20:21:57.591049 | orchestrator | Monday 02 June 2025 20:19:57 +0000 (0:00:00.529) 0:00:13.404 ***********
2025-06-02 20:21:57.591055 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:21:57.591061 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:21:57.591067 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:21:57.591073 | orchestrator |
2025-06-02 20:21:57.591078 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-06-02 20:21:57.591084 | orchestrator | Monday 02 June 2025 20:19:57 +0000 (0:00:00.529) 0:00:13.933 ***********
2025-06-02 20:21:57.591094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1319389, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1388845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.591131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1319389, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1388845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.591138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1319389, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1388845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.591145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False,
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1319371, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1328845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1319371, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1328845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1319371, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1328845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1319364, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1308844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1319364, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1308844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1319364, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1308844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1319381, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1348844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1319381, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1348844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1319381, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1348844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1319352, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1258843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1319352, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1258843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1319352, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1258843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591244 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1319367, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1308844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1319367, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1308844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1319367, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1308844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591266 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1319378, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1348844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1319378, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1348844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1319378, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1348844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-06-02 20:21:57.591293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1319350, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1258843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1319350, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1258843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1319350, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1258843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1319330, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1188843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1319330, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1188843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1319330, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1188843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-06-02 20:21:57.591514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1319355, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1268845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1319355, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1268845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1319355, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1268845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1319337, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1218843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1319337, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1218843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1319337, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1218843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1319375, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1338844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1319375, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1338844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1319375, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1338844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1319359, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1288843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1319359, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1288843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1319359, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1288843, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1319385, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1368845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1319385, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1368845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1319385, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1368845, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1319345, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1248844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1319345, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1248844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1319345, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 
1748892814.1248844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1319369, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1318843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1319369, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1318843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1319369, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 
1748870561.0, 'ctime': 1748892814.1318843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1319331, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1208842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1319331, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1208842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 
'inode': 1319331, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1208842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1319341, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1228843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1319341, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1228843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 80386, 'inode': 1319341, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1228843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1319362, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1298845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1319362, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1298845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1319362, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1298845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1319468, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.159885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1319468, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.159885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1319468, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.159885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1319459, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1518848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1319459, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1518848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591797 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1319459, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1518848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1319399, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1388845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1319399, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1388845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1319399, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1388845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319507, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.164885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319507, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.164885, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319507, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.164885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.591852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1319405, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1398845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1319405, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 
'mtime': 1748870561.0, 'ctime': 1748892814.1398845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1319405, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1398845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1319503, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1638849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1319503, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1638849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1319503, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1638849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319511, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.167885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319511, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.167885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319511, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.167885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1319487, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1618848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1319487, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1618848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1319487, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1618848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1319498, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1638849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-06-02 20:21:57.592626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1319498, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1638849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1319498, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1638849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1319409, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1408846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1319409, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1408846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1319409, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1408846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1319461, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1518848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1319461, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1518848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1319461, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1518848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:21:57.592687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319519, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.167885, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319519, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.167885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319519, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.167885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319505, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.164885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319505, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.164885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319505, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.164885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1319420, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1428845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1319420, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1428845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1319420, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1428845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1319416, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1418846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1319416, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1418846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1319416, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1418846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1319427, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1438847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1319427, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1438847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1319427, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1438847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1319435, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1508846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1319435, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1508846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1319435, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1508846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1319462, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1528847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1319462, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1528847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1319462, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1528847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1319495, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.162885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1319495, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.162885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1319495, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.162885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1319464, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1528847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1319464, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1528847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1319464, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.1528847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319523, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.169885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319523, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.169885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319523, 'dev': 100, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748892814.169885,
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:21:57.592931 | orchestrator |
2025-06-02 20:21:57.592940 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-06-02 20:21:57.592946 | orchestrator | Monday 02 June 2025 20:20:35 +0000 (0:00:37.800) 0:00:51.734 ***********
2025-06-02 20:21:57.592952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.592963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.592969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:21:57.592975 | orchestrator |
2025-06-02 20:21:57.592981 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-06-02 20:21:57.592987 | orchestrator | Monday 02 June 2025 20:20:36 +0000 (0:00:01.126) 0:00:52.860 ***********
2025-06-02 20:21:57.592993 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:21:57.592999 | orchestrator |
2025-06-02 20:21:57.593005 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-06-02 20:21:57.593011 | orchestrator | Monday 02 June 2025 20:20:39 +0000 (0:00:02.339) 0:00:55.200 ***********
2025-06-02 20:21:57.593016 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:21:57.593022 | orchestrator |
2025-06-02 20:21:57.593028 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 20:21:57.593034 | orchestrator | Monday 02 June 2025 20:20:41 +0000 (0:00:02.246) 0:00:57.447 ***********
2025-06-02 20:21:57.593040 | orchestrator |
2025-06-02 20:21:57.593045 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 20:21:57.593051 | orchestrator | Monday 02 June 2025 20:20:41 +0000 (0:00:00.235) 0:00:57.683 ***********
2025-06-02 20:21:57.593056 | orchestrator |
2025-06-02 20:21:57.593062 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 20:21:57.593068 | orchestrator | Monday 02 June 2025 20:20:41 +0000 (0:00:00.075) 0:00:57.758 ***********
2025-06-02 20:21:57.593073 | orchestrator |
2025-06-02 20:21:57.593079 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-06-02 20:21:57.593085 | orchestrator | Monday 02 June 2025 20:20:41 +0000 (0:00:00.063) 0:00:57.822 ***********
2025-06-02 20:21:57.593091 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:21:57.593096 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:21:57.593137 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:21:57.593143 | orchestrator |
2025-06-02 20:21:57.593150 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-06-02 20:21:57.593156 | orchestrator | Monday 02 June 2025 20:20:43 +0000 (0:00:01.883) 0:00:59.705 ***********
2025-06-02 20:21:57.593162 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:21:57.593173 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:21:57.593179 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-06-02 20:21:57.593186 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-06-02 20:21:57.593193 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-06-02 20:21:57.593199 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:21:57.593205 | orchestrator |
2025-06-02 20:21:57.593211 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-06-02 20:21:57.593224 | orchestrator | Monday 02 June 2025 20:21:22 +0000 (0:00:38.895) 0:01:38.600 ***********
2025-06-02 20:21:57.593231 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:21:57.593241 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:21:57.593247 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:21:57.593253 | orchestrator |
2025-06-02 20:21:57.593260 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-06-02 20:21:57.593266 | orchestrator | Monday 02 June 2025 20:21:49 +0000 (0:00:26.774) 0:02:05.375 ***********
2025-06-02 20:21:57.593272 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:21:57.593278 | orchestrator |
2025-06-02 20:21:57.593285 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-06-02 20:21:57.593291 | orchestrator | Monday 02 June 2025 20:21:51 +0000 (0:00:02.439) 0:02:07.815 ***********
2025-06-02 20:21:57.593296 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:21:57.593302 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:21:57.593307 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:21:57.593313 | orchestrator |
2025-06-02 20:21:57.593318 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-06-02 20:21:57.593323 | orchestrator | Monday 02 June 2025 20:21:51 +0000 (0:00:00.275) 0:02:08.091 ***********
2025-06-02 20:21:57.593330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-06-02 20:21:57.593337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-06-02 20:21:57.593343 | orchestrator |
2025-06-02 20:21:57.593348 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-06-02 20:21:57.593354 | orchestrator | Monday 02 June 2025 20:21:54 +0000 (0:00:02.528) 0:02:10.619 ***********
2025-06-02 20:21:57.593359 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:21:57.593365 | orchestrator |
2025-06-02 20:21:57.593370 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:21:57.593376 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 20:21:57.593383 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 20:21:57.593388 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 20:21:57.593394 | orchestrator |
2025-06-02 20:21:57.593399 | orchestrator |
2025-06-02 20:21:57.593405 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:21:57.593410 | orchestrator | Monday 02 June 2025 20:21:54 +0000 (0:00:00.251) 0:02:10.871 ***********
2025-06-02 20:21:57.593416 | orchestrator | ===============================================================================
2025-06-02 20:21:57.593425 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.90s
2025-06-02 20:21:57.593431 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.80s
2025-06-02 20:21:57.593436 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 26.77s
2025-06-02 20:21:57.593441 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.53s
2025-06-02 20:21:57.593447 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.44s
2025-06-02 20:21:57.593452 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.34s
2025-06-02 20:21:57.593458 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.25s
2025-06-02 20:21:57.593463 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.88s
2025-06-02 20:21:57.593469 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.48s
2025-06-02 20:21:57.593474 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.36s
2025-06-02 20:21:57.593480 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.30s
2025-06-02 20:21:57.593485 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.30s
2025-06-02 20:21:57.593490 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s
2025-06-02 20:21:57.593496 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.13s
2025-06-02 20:21:57.593501 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.82s
2025-06-02 20:21:57.593507 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.80s
2025-06-02 20:21:57.593512 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s
2025-06-02 20:21:57.593518 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.73s
2025-06-02 20:21:57.593523 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.70s
2025-06-02 20:21:57.593528 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.64s
2025-06-02 20:21:57.593538 | orchestrator | 2025-06-02 20:21:57 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:21:57.593547 | orchestrator | 2025-06-02 20:21:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:00.628590 | orchestrator | 2025-06-02 20:22:00 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:22:00.631333 | orchestrator | 2025-06-02 20:22:00 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:22:00.631375 | orchestrator | 2025-06-02 20:22:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:03.673141 | orchestrator | 2025-06-02 20:22:03 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:22:03.674260 | orchestrator | 2025-06-02 20:22:03 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:22:03.674307 | orchestrator | 2025-06-02 20:22:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:06.712558 | orchestrator | 2025-06-02 20:22:06 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:22:06.713721 | orchestrator | 2025-06-02 20:22:06 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:22:06.713760 | orchestrator | 2025-06-02 20:22:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:09.761954 | orchestrator | 2025-06-02 20:22:09 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:22:09.762293 | orchestrator | 2025-06-02 20:22:09 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:22:09.763343 | orchestrator | 2025-06-02 20:22:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:12.809701 | orchestrator | 2025-06-02 20:22:12 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:22:12.809786 | orchestrator | 2025-06-02 20:22:12 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:22:12.809796 | orchestrator | 2025-06-02 20:22:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:15.850504 | orchestrator | 2025-06-02 20:22:15 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state STARTED
2025-06-02 20:22:15.851743 | orchestrator | 2025-06-02 20:22:15 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED
2025-06-02 20:22:15.851843 | orchestrator | 2025-06-02 20:22:15 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:18.905625 | orchestrator | 2025-06-02 20:22:18 | INFO  | Task b752ea01-5134-46bd-b76a-c976bd580bef is in state SUCCESS
2025-06-02 20:22:18.908482 | orchestrator |
2025-06-02 20:22:18.908632 | orchestrator |
2025-06-02 20:22:18.908646 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:22:18.908657 | orchestrator |
2025-06-02 20:22:18.908666 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-06-02 20:22:18.908688 | orchestrator | Monday 02 June 2025 20:13:15 +0000 (0:00:00.258) 0:00:00.258 ***********
2025-06-02 20:22:18.908706 | orchestrator | changed: [testbed-manager]
2025-06-02 20:22:18.908717 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.908726 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:22:18.908735 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:22:18.908744 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:22:18.908753 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:22:18.908762 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:22:18.908770 | orchestrator |
2025-06-02 20:22:18.908779 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:22:18.908788 | orchestrator | Monday 02 June 2025 20:13:16 +0000 (0:00:00.726) 0:00:00.985 ***********
2025-06-02 20:22:18.908797 | orchestrator | changed: [testbed-manager]
2025-06-02 20:22:18.908806 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.908814 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:22:18.908823 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:22:18.908832 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:22:18.908841 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:22:18.908849 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:22:18.908858 | orchestrator |
2025-06-02 20:22:18.908867 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:22:18.908876 | orchestrator | Monday 02 June 2025 20:13:16 +0000 (0:00:00.633) 0:00:01.618 ***********
2025-06-02 20:22:18.908884 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-06-02 20:22:18.908894 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-06-02 20:22:18.908902 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-06-02 20:22:18.908911 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-06-02 20:22:18.908975 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-06-02 20:22:18.909050 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-06-02 20:22:18.909062 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-06-02 20:22:18.909096 | orchestrator |
2025-06-02 20:22:18.909106 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-06-02 20:22:18.909132 | orchestrator |
2025-06-02 20:22:18.909143 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-02 20:22:18.909154 | orchestrator | Monday 02 June 2025 20:13:17 +0000 (0:00:00.746) 0:00:02.365 ***********
2025-06-02 20:22:18.909183 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:22:18.909222 | orchestrator |
2025-06-02 20:22:18.909253 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-06-02 20:22:18.909264 | orchestrator | Monday 02 June 2025 20:13:18 +0000 (0:00:00.641) 0:00:03.006 ***********
2025-06-02 20:22:18.909275 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-06-02 20:22:18.909287 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-06-02 20:22:18.909297 | orchestrator |
2025-06-02 20:22:18.909307 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-06-02 20:22:18.909318 | orchestrator | Monday 02 June 2025 20:13:22 +0000 (0:00:04.136) 0:00:07.142 ***********
2025-06-02 20:22:18.909328 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:22:18.909336 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:22:18.909345 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.909353 | orchestrator |
2025-06-02 20:22:18.909362 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-02 20:22:18.909371 | orchestrator | Monday 02 June 2025 20:13:26 +0000 (0:00:00.900) 0:00:11.344 ***********
2025-06-02 20:22:18.909379 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.909388 | orchestrator |
2025-06-02 20:22:18.909397 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-06-02 20:22:18.909408 | orchestrator | Monday 02 June 2025 20:13:27 +0000 (0:00:00.900) 0:00:12.245 ***********
2025-06-02 20:22:18.909422 | orchestrator | changed: [testbed-node-0] 2025-06-02 
20:22:18.909441 | orchestrator | 2025-06-02 20:22:18.909462 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-02 20:22:18.909476 | orchestrator | Monday 02 June 2025 20:13:28 +0000 (0:00:01.523) 0:00:13.769 *********** 2025-06-02 20:22:18.909490 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:22:18.909504 | orchestrator | 2025-06-02 20:22:18.909518 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 20:22:18.909532 | orchestrator | Monday 02 June 2025 20:13:32 +0000 (0:00:03.414) 0:00:17.184 *********** 2025-06-02 20:22:18.909546 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.909560 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.909575 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.909589 | orchestrator | 2025-06-02 20:22:18.909603 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 20:22:18.909617 | orchestrator | Monday 02 June 2025 20:13:32 +0000 (0:00:00.566) 0:00:17.750 *********** 2025-06-02 20:22:18.909626 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:22:18.909635 | orchestrator | 2025-06-02 20:22:18.909644 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-02 20:22:18.909653 | orchestrator | Monday 02 June 2025 20:14:02 +0000 (0:00:29.971) 0:00:47.725 *********** 2025-06-02 20:22:18.909662 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:22:18.909670 | orchestrator | 2025-06-02 20:22:18.909679 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 20:22:18.909687 | orchestrator | Monday 02 June 2025 20:14:18 +0000 (0:00:15.808) 0:01:03.534 *********** 2025-06-02 20:22:18.909696 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:22:18.909705 | orchestrator | 2025-06-02 20:22:18.909772 | orchestrator | TASK 
[nova-cell : Extract current cell settings from list] ********************* 2025-06-02 20:22:18.909782 | orchestrator | Monday 02 June 2025 20:14:31 +0000 (0:00:12.571) 0:01:16.106 *********** 2025-06-02 20:22:18.909807 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:22:18.909816 | orchestrator | 2025-06-02 20:22:18.909825 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-02 20:22:18.909834 | orchestrator | Monday 02 June 2025 20:14:32 +0000 (0:00:00.960) 0:01:17.066 *********** 2025-06-02 20:22:18.909843 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.909852 | orchestrator | 2025-06-02 20:22:18.909860 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 20:22:18.909869 | orchestrator | Monday 02 June 2025 20:14:32 +0000 (0:00:00.364) 0:01:17.430 *********** 2025-06-02 20:22:18.909888 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:22:18.909897 | orchestrator | 2025-06-02 20:22:18.909905 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 20:22:18.909914 | orchestrator | Monday 02 June 2025 20:14:32 +0000 (0:00:00.369) 0:01:17.800 *********** 2025-06-02 20:22:18.909923 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:22:18.909931 | orchestrator | 2025-06-02 20:22:18.909940 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-02 20:22:18.909949 | orchestrator | Monday 02 June 2025 20:14:51 +0000 (0:00:18.179) 0:01:35.979 *********** 2025-06-02 20:22:18.909958 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.909966 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.909975 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.909984 | orchestrator | 2025-06-02 20:22:18.909992 | orchestrator | PLAY 
[Bootstrap nova cell databases] *******************************************
2025-06-02 20:22:18.910001 | orchestrator |
2025-06-02 20:22:18.910010 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-02 20:22:18.910097 | orchestrator | Monday 02 June 2025 20:14:51 +0000 (0:00:00.347) 0:01:36.327 ***********
2025-06-02 20:22:18.910106 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:22:18.910115 | orchestrator |
2025-06-02 20:22:18.910123 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-06-02 20:22:18.910132 | orchestrator | Monday 02 June 2025 20:14:51 +0000 (0:00:00.587) 0:01:36.914 ***********
2025-06-02 20:22:18.910141 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910149 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910158 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.910166 | orchestrator |
2025-06-02 20:22:18.910175 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-06-02 20:22:18.910183 | orchestrator | Monday 02 June 2025 20:14:54 +0000 (0:00:02.098) 0:01:39.013 ***********
2025-06-02 20:22:18.910192 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910200 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910209 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.910218 | orchestrator |
2025-06-02 20:22:18.910233 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-02 20:22:18.910242 | orchestrator | Monday 02 June 2025 20:14:56 +0000 (0:00:02.314) 0:01:41.327 ***********
2025-06-02 20:22:18.910251 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.910260 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910268 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910277 | orchestrator |
2025-06-02 20:22:18.910286 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-02 20:22:18.910294 | orchestrator | Monday 02 June 2025 20:14:57 +0000 (0:00:00.958) 0:01:42.286 ***********
2025-06-02 20:22:18.910303 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-02 20:22:18.910311 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910320 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-02 20:22:18.910329 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910338 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-02 20:22:18.910346 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-06-02 20:22:18.910355 | orchestrator |
2025-06-02 20:22:18.910363 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-02 20:22:18.910372 | orchestrator | Monday 02 June 2025 20:15:06 +0000 (0:00:09.297) 0:01:51.583 ***********
2025-06-02 20:22:18.910381 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.910389 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910398 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910406 | orchestrator |
2025-06-02 20:22:18.910415 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-02 20:22:18.910430 | orchestrator | Monday 02 June 2025 20:15:07 +0000 (0:00:00.406) 0:01:51.990 ***********
2025-06-02 20:22:18.910439 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-02 20:22:18.910448 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.910456 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-02 20:22:18.910465 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910473 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-02 20:22:18.910482 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910490 | orchestrator |
2025-06-02 20:22:18.910499 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-02 20:22:18.910508 | orchestrator | Monday 02 June 2025 20:15:07 +0000 (0:00:00.833) 0:01:52.823 ***********
2025-06-02 20:22:18.910516 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910525 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.910533 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910542 | orchestrator |
2025-06-02 20:22:18.910550 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-06-02 20:22:18.910559 | orchestrator | Monday 02 June 2025 20:15:08 +0000 (0:00:00.541) 0:01:53.365 ***********
2025-06-02 20:22:18.910567 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910576 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910585 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.910593 | orchestrator |
2025-06-02 20:22:18.910602 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-06-02 20:22:18.910611 | orchestrator | Monday 02 June 2025 20:15:09 +0000 (0:00:00.929) 0:01:54.295 ***********
2025-06-02 20:22:18.910619 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910628 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910653 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.910663 | orchestrator |
2025-06-02 20:22:18.910671 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-06-02 20:22:18.910680 | orchestrator | Monday 02 June 2025 20:15:12 +0000 (0:00:02.685) 0:01:56.980 ***********
2025-06-02 20:22:18.910688 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910697 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910706 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:22:18.910714 | orchestrator |
2025-06-02 20:22:18.910723 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-02 20:22:18.910731 | orchestrator | Monday 02 June 2025 20:15:33 +0000 (0:00:21.119) 0:02:18.099 ***********
2025-06-02 20:22:18.910740 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910749 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910757 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:22:18.910766 | orchestrator |
2025-06-02 20:22:18.910775 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-02 20:22:18.910784 | orchestrator | Monday 02 June 2025 20:15:44 +0000 (0:00:11.811) 0:02:29.911 ***********
2025-06-02 20:22:18.910792 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:22:18.910801 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910810 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910818 | orchestrator |
2025-06-02 20:22:18.910827 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-06-02 20:22:18.910835 | orchestrator | Monday 02 June 2025 20:15:45 +0000 (0:00:00.770) 0:02:30.681 ***********
2025-06-02 20:22:18.910844 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910852 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910861 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.910870 | orchestrator |
2025-06-02 20:22:18.910878 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-06-02 20:22:18.910887 | orchestrator | Monday 02 June 2025 20:15:56 +0000 (0:00:11.005) 0:02:41.686 ***********
2025-06-02 20:22:18.910896 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.910904 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910919 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910927 | orchestrator |
2025-06-02 20:22:18.910936 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-02 20:22:18.910944 | orchestrator | Monday 02 June 2025 20:15:58 +0000 (0:00:01.493) 0:02:43.180 ***********
2025-06-02 20:22:18.910953 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.910962 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.910970 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.910979 | orchestrator |
2025-06-02 20:22:18.910987 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-06-02 20:22:18.910996 | orchestrator |
2025-06-02 20:22:18.911004 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-02 20:22:18.911017 | orchestrator | Monday 02 June 2025 20:15:58 +0000 (0:00:00.366) 0:02:43.546 ***********
2025-06-02 20:22:18.911050 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:22:18.911060 | orchestrator |
2025-06-02 20:22:18.911148 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-06-02 20:22:18.911159 | orchestrator | Monday 02 June 2025 20:15:59 +0000 (0:00:00.544) 0:02:44.091 ***********
2025-06-02 20:22:18.911206 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-06-02 20:22:18.911218 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-06-02 20:22:18.911316 | orchestrator |
2025-06-02 20:22:18.911344 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-06-02 20:22:18.911353 | orchestrator | Monday 02 June 2025 20:16:02 +0000 (0:00:03.270) 0:02:47.361 ***********
2025-06-02 20:22:18.911361 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy ->
https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-06-02 20:22:18.911370 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-06-02 20:22:18.911378 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-06-02 20:22:18.911386 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-06-02 20:22:18.911394 | orchestrator |
2025-06-02 20:22:18.911402 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-06-02 20:22:18.911410 | orchestrator | Monday 02 June 2025 20:16:09 +0000 (0:00:07.508) 0:02:54.870 ***********
2025-06-02 20:22:18.911418 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 20:22:18.911426 | orchestrator |
2025-06-02 20:22:18.911434 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-06-02 20:22:18.911442 | orchestrator | Monday 02 June 2025 20:16:13 +0000 (0:00:03.211) 0:02:58.082 ***********
2025-06-02 20:22:18.911450 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 20:22:18.911458 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-06-02 20:22:18.911466 | orchestrator |
2025-06-02 20:22:18.911473 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-06-02 20:22:18.911482 | orchestrator | Monday 02 June 2025 20:16:17 +0000 (0:00:04.066) 0:03:02.148 ***********
2025-06-02 20:22:18.911490 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 20:22:18.911498 | orchestrator |
2025-06-02 20:22:18.911505 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-06-02 20:22:18.911513 | orchestrator | Monday 02 June 2025 20:16:20 +0000 (0:00:03.669) 0:03:05.818 ***********
2025-06-02 20:22:18.911521 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-06-02 20:22:18.911529 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-06-02 20:22:18.911536 | orchestrator |
2025-06-02 20:22:18.911544 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-02 20:22:18.911558 | orchestrator | Monday 02 June 2025 20:16:28 +0000 (0:00:07.406) 0:03:13.225 ***********
2025-06-02 20:22:18.911579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 20:22:18.911597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 20:22:18.911608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 20:22:18.911624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:22:18.911640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:22:18.911649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:22:18.911657 | orchestrator |
2025-06-02 20:22:18.911668 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-06-02 20:22:18.911682 | orchestrator | Monday 02 June 2025 20:16:29 +0000 (0:00:01.698) 0:03:14.923 ***********
2025-06-02 20:22:18.911694 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.911702 | orchestrator |
2025-06-02 20:22:18.911710 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-06-02 20:22:18.911794 | orchestrator | Monday 02 June 2025 20:16:30 +0000 (0:00:00.294) 0:03:15.218 ***********
2025-06-02 20:22:18.911803 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.911828 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.911836 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.911844 | orchestrator |
2025-06-02 20:22:18.911897 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-06-02 20:22:18.911907 | orchestrator | Monday 02 June 2025 20:16:31 +0000 (0:00:00.941) 0:03:16.160 ***********
2025-06-02 20:22:18.911915 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:22:18.911923 | orchestrator |
2025-06-02 20:22:18.911931 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-06-02 20:22:18.911939 | orchestrator | Monday 02 June 2025 20:16:31 +0000 (0:00:00.771) 0:03:16.932 ***********
2025-06-02 20:22:18.911947 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.911955 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.911963 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.911970 | orchestrator | 2025-06-02
20:22:18.911978 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-02 20:22:18.911986 | orchestrator | Monday 02 June 2025 20:16:32 +0000 (0:00:00.267) 0:03:17.199 ***********
2025-06-02 20:22:18.911994 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:22:18.912002 | orchestrator |
2025-06-02 20:22:18.912010 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-02 20:22:18.912017 | orchestrator | Monday 02 June 2025 20:16:32 +0000 (0:00:00.594) 0:03:17.794 ***********
2025-06-02 20:22:18.912026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 20:22:18.912049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 20:22:18.912063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 20:22:18.912091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:22:18.912100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:22:18.912119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:22:18.912128 | orchestrator |
2025-06-02 20:22:18.912136 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-06-02 20:22:18.912144 | orchestrator | Monday 02 June 2025 20:16:35 +0000 (0:00:02.913) 0:03:20.708 ***********
2025-06-02 20:22:18.912153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 20:22:18.912166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:22:18.912175 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.912183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 20:22:18.912197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:22:18.912206 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.912221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 20:22:18.912231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler',
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.912240 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.912248 | orchestrator | 2025-06-02 20:22:18.912256 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 20:22:18.912264 | orchestrator | Monday 02 June 2025 20:16:37 +0000 (0:00:01.412) 0:03:22.121 *********** 2025-06-02 20:22:18.912279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:22:18.912293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.912302 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.912321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-06-02 20:22:18.912334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.912343 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.912355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 
20:22:18.912369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.912378 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.912386 | orchestrator | 2025-06-02 20:22:18.912394 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-02 20:22:18.912401 | orchestrator | Monday 02 June 2025 20:16:38 +0000 (0:00:01.264) 0:03:23.385 *********** 2025-06-02 20:22:18.912417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:22:18.912426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:22:18.912439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:22:18.912453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.912467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-06-02 20:22:18.912476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.912485 | orchestrator | 2025-06-02 20:22:18.912493 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-02 20:22:18.912501 | orchestrator | Monday 02 June 2025 20:16:41 +0000 (0:00:02.720) 0:03:26.105 *********** 2025-06-02 20:22:18.912513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:22:18.912528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:22:18.912543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:22:18.912552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.912560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.912572 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.912585 | orchestrator | 2025-06-02 20:22:18.912593 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-02 20:22:18.912602 | orchestrator | Monday 02 June 2025 20:16:49 +0000 (0:00:08.411) 0:03:34.517 *********** 2025-06-02 20:22:18.912610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:22:18.912624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.912632 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.912641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:22:18.912653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.912666 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.912675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-06-02 20:22:18.912684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.912692 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.912700 | orchestrator | 2025-06-02 20:22:18.912708 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-02 20:22:18.912716 | orchestrator | Monday 02 June 2025 20:16:50 +0000 (0:00:00.898) 0:03:35.416 *********** 2025-06-02 20:22:18.912724 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:22:18.912732 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:22:18.912740 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:22:18.912748 | orchestrator | 2025-06-02 20:22:18.912760 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-02 20:22:18.912768 | orchestrator | Monday 02 June 2025 20:16:52 +0000 (0:00:02.248) 0:03:37.665 *********** 2025-06-02 20:22:18.912776 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.912784 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.912792 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.912800 | orchestrator | 2025-06-02 20:22:18.912808 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-02 20:22:18.912816 | orchestrator | Monday 02 June 2025 20:16:53 +0000 (0:00:00.292) 0:03:37.957 *********** 
2025-06-02 20:22:18.912824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:22:18.912842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:22:18.912857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:22:18.912867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.912875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.912888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.912896 | orchestrator | 2025-06-02 20:22:18.912904 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 20:22:18.912918 | orchestrator | Monday 02 June 2025 20:16:54 +0000 (0:00:01.875) 0:03:39.833 *********** 2025-06-02 20:22:18.912927 | orchestrator | 2025-06-02 20:22:18.912934 
| orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-06-02 20:22:18.912942 | orchestrator | Monday 02 June 2025 20:16:55 +0000 (0:00:00.137) 0:03:39.971 ***********
2025-06-02 20:22:18.912950 | orchestrator |
2025-06-02 20:22:18.912958 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-06-02 20:22:18.912966 | orchestrator | Monday 02 June 2025 20:16:55 +0000 (0:00:00.151) 0:03:40.123 ***********
2025-06-02 20:22:18.912973 | orchestrator |
2025-06-02 20:22:18.912981 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-06-02 20:22:18.912989 | orchestrator | Monday 02 June 2025 20:16:55 +0000 (0:00:00.273) 0:03:40.396 ***********
2025-06-02 20:22:18.912997 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.913005 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:22:18.913013 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:22:18.913021 | orchestrator |
2025-06-02 20:22:18.913029 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-06-02 20:22:18.913036 | orchestrator | Monday 02 June 2025 20:17:16 +0000 (0:00:21.532) 0:04:01.929 ***********
2025-06-02 20:22:18.913044 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:22:18.913052 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:22:18.913060 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:22:18.913115 | orchestrator |
2025-06-02 20:22:18.913124 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-06-02 20:22:18.913132 | orchestrator |
2025-06-02 20:22:18.913140 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-02 20:22:18.913148 | orchestrator | Monday 02 June 2025 20:17:30 +0000 (0:00:13.471) 0:04:15.400 ***********
2025-06-02 20:22:18.913157 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:22:18.913165 | orchestrator |
2025-06-02 20:22:18.913173 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-02 20:22:18.913181 | orchestrator | Monday 02 June 2025 20:17:31 +0000 (0:00:01.382) 0:04:16.783 ***********
2025-06-02 20:22:18.913189 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:22:18.913197 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:22:18.913205 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:22:18.913213 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.913221 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.913229 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.913237 | orchestrator |
2025-06-02 20:22:18.913244 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-06-02 20:22:18.913252 | orchestrator | Monday 02 June 2025 20:17:33 +0000 (0:00:01.387) 0:04:18.170 ***********
2025-06-02 20:22:18.913260 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.913274 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.913282 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.913290 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:22:18.913298 | orchestrator |
2025-06-02 20:22:18.913306 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 20:22:18.913320 | orchestrator | Monday 02 June 2025 20:17:34 +0000 (0:00:01.345) 0:04:19.516 ***********
2025-06-02 20:22:18.913328 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-06-02 20:22:18.913336 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-06-02 20:22:18.913344 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-06-02 20:22:18.913352 | orchestrator |
2025-06-02 20:22:18.913360 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-02 20:22:18.913368 | orchestrator | Monday 02 June 2025 20:17:35 +0000 (0:00:01.172) 0:04:20.688 ***********
2025-06-02 20:22:18.913376 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-06-02 20:22:18.913384 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-06-02 20:22:18.913392 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-06-02 20:22:18.913400 | orchestrator |
2025-06-02 20:22:18.913407 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-02 20:22:18.913415 | orchestrator | Monday 02 June 2025 20:17:37 +0000 (0:00:01.550) 0:04:22.239 ***********
2025-06-02 20:22:18.913423 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-06-02 20:22:18.913432 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:22:18.913440 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-06-02 20:22:18.913448 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:22:18.913455 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-06-02 20:22:18.913463 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:22:18.913471 | orchestrator |
2025-06-02 20:22:18.913479 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-06-02 20:22:18.913487 | orchestrator | Monday 02 June 2025 20:17:38 +0000 (0:00:01.139) 0:04:23.379 ***********
2025-06-02 20:22:18.913495 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 20:22:18.913503 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 20:22:18.913511 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.913519 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 20:22:18.913526 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 20:22:18.913534 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.913542 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 20:22:18.913550 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 20:22:18.913558 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.913566 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 20:22:18.913579 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 20:22:18.913587 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 20:22:18.913595 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 20:22:18.913603 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 20:22:18.913611 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 20:22:18.913619 | orchestrator |
2025-06-02 20:22:18.913627 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-06-02 20:22:18.913635 | orchestrator | Monday 02 June 2025 20:17:39 +0000 (0:00:01.324) 0:04:24.703 ***********
2025-06-02 20:22:18.913648 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:22:18.913656 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:22:18.913664 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:22:18.913671 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:22:18.913678 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:22:18.913684 | orchestrator | changed:
[testbed-node-5] 2025-06-02 20:22:18.913691 | orchestrator | 2025-06-02 20:22:18.913699 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-02 20:22:18.913705 | orchestrator | Monday 02 June 2025 20:17:41 +0000 (0:00:01.700) 0:04:26.403 *********** 2025-06-02 20:22:18.913712 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.913719 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.913725 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.913732 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:22:18.913739 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:22:18.913746 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:22:18.913752 | orchestrator | 2025-06-02 20:22:18.913759 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-02 20:22:18.913766 | orchestrator | Monday 02 June 2025 20:17:43 +0000 (0:00:02.119) 0:04:28.523 *********** 2025-06-02 20:22:18.913773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913787 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913846 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.913884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2025-06-02 20:22:18.914438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914483 | orchestrator | 2025-06-02 20:22:18.914490 | orchestrator | TASK [nova-cell : include_tasks] 
*********************************************** 2025-06-02 20:22:18.914497 | orchestrator | Monday 02 June 2025 20:17:47 +0000 (0:00:03.891) 0:04:32.414 *********** 2025-06-02 20:22:18.914505 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:22:18.914523 | orchestrator | 2025-06-02 20:22:18.914529 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-02 20:22:18.914536 | orchestrator | Monday 02 June 2025 20:17:48 +0000 (0:00:01.021) 0:04:33.436 *********** 2025-06-02 20:22:18.914550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914648 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.914735 | orchestrator | 2025-06-02 20:22:18.914742 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-02 20:22:18.914762 | orchestrator | Monday 02 June 2025 20:17:51 +0000 (0:00:03.485) 0:04:36.922 *********** 2025-06-02 20:22:18.914819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.914833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.914862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.914873 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.914881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.914888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.914917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.914925 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.914932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.914944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.914956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.914963 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.914970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:22:18.914978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.914985 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.915013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:22:18.915027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.915035 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.915043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:22:18.915054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.915062 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.915094 | orchestrator | 2025-06-02 20:22:18.915104 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 20:22:18.915112 | orchestrator | Monday 02 June 2025 20:17:55 +0000 (0:00:03.090) 0:04:40.012 *********** 2025-06-02 20:22:18.915121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.915129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.915160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.915175 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.915183 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.915195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.915204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.915212 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.915220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.915247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.915262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.915270 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.915278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:22:18.915290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.915298 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.915306 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:22:18.915315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.915323 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.915330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:22:18.915365 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.915374 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.915382 | orchestrator | 2025-06-02 20:22:18.915390 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 20:22:18.915398 | orchestrator | Monday 02 June 2025 20:17:58 +0000 (0:00:03.348) 0:04:43.360 *********** 2025-06-02 20:22:18.915405 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.915412 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.915418 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.915425 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:22:18.915432 | orchestrator | 2025-06-02 20:22:18.915439 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-06-02 20:22:18.915446 | orchestrator | Monday 02 June 2025 20:17:59 +0000 (0:00:01.073) 0:04:44.434 *********** 2025-06-02 20:22:18.915456 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 20:22:18.915467 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 20:22:18.915480 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 20:22:18.915496 | orchestrator | 2025-06-02 20:22:18.915508 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-02 20:22:18.915520 | orchestrator | 
Monday 02 June 2025 20:18:00 +0000 (0:00:01.457) 0:04:45.892 *********** 2025-06-02 20:22:18.915531 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 20:22:18.915542 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 20:22:18.915560 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 20:22:18.915570 | orchestrator | 2025-06-02 20:22:18.915582 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-02 20:22:18.915593 | orchestrator | Monday 02 June 2025 20:18:02 +0000 (0:00:01.162) 0:04:47.054 *********** 2025-06-02 20:22:18.915604 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:22:18.915616 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:22:18.915628 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:22:18.915640 | orchestrator | 2025-06-02 20:22:18.915652 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-02 20:22:18.915667 | orchestrator | Monday 02 June 2025 20:18:02 +0000 (0:00:00.463) 0:04:47.517 *********** 2025-06-02 20:22:18.915675 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:22:18.915686 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:22:18.915693 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:22:18.915700 | orchestrator | 2025-06-02 20:22:18.915707 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-06-02 20:22:18.915713 | orchestrator | Monday 02 June 2025 20:18:03 +0000 (0:00:00.574) 0:04:48.092 *********** 2025-06-02 20:22:18.915720 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 20:22:18.915727 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 20:22:18.915733 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 20:22:18.915740 | orchestrator | 2025-06-02 20:22:18.915747 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] 
************************** 2025-06-02 20:22:18.915760 | orchestrator | Monday 02 June 2025 20:18:04 +0000 (0:00:01.306) 0:04:49.399 *********** 2025-06-02 20:22:18.915768 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 20:22:18.915779 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 20:22:18.915789 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 20:22:18.915800 | orchestrator | 2025-06-02 20:22:18.915811 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-02 20:22:18.915820 | orchestrator | Monday 02 June 2025 20:18:05 +0000 (0:00:01.169) 0:04:50.569 *********** 2025-06-02 20:22:18.915831 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 20:22:18.915842 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 20:22:18.915853 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 20:22:18.915863 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-02 20:22:18.915878 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-02 20:22:18.915892 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-02 20:22:18.915902 | orchestrator | 2025-06-02 20:22:18.915913 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-02 20:22:18.915924 | orchestrator | Monday 02 June 2025 20:18:10 +0000 (0:00:04.731) 0:04:55.301 *********** 2025-06-02 20:22:18.915934 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.915944 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.915954 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.915963 | orchestrator | 2025-06-02 20:22:18.915973 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-02 20:22:18.915983 | orchestrator | Monday 02 June 2025 
20:18:10 +0000 (0:00:00.327) 0:04:55.628 *********** 2025-06-02 20:22:18.915993 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.916002 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.916012 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.916022 | orchestrator | 2025-06-02 20:22:18.916032 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-02 20:22:18.916042 | orchestrator | Monday 02 June 2025 20:18:11 +0000 (0:00:00.539) 0:04:56.168 *********** 2025-06-02 20:22:18.916052 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:22:18.916062 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:22:18.916132 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:22:18.916143 | orchestrator | 2025-06-02 20:22:18.916207 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-02 20:22:18.916219 | orchestrator | Monday 02 June 2025 20:18:13 +0000 (0:00:01.880) 0:04:58.049 *********** 2025-06-02 20:22:18.916231 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 20:22:18.916243 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 20:22:18.916255 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 20:22:18.916266 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 20:22:18.916277 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 20:22:18.916287 | orchestrator | changed: 
[testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 20:22:18.916298 | orchestrator | 2025-06-02 20:22:18.916309 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-02 20:22:18.916321 | orchestrator | Monday 02 June 2025 20:18:16 +0000 (0:00:03.830) 0:05:01.880 *********** 2025-06-02 20:22:18.916342 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 20:22:18.916352 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 20:22:18.916362 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 20:22:18.916372 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 20:22:18.916383 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:22:18.916394 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 20:22:18.916405 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:22:18.916430 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 20:22:18.916437 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:22:18.916443 | orchestrator | 2025-06-02 20:22:18.916449 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-02 20:22:18.916456 | orchestrator | Monday 02 June 2025 20:18:20 +0000 (0:00:03.653) 0:05:05.534 *********** 2025-06-02 20:22:18.916462 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.916468 | orchestrator | 2025-06-02 20:22:18.916474 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-06-02 20:22:18.916487 | orchestrator | Monday 02 June 2025 20:18:20 +0000 (0:00:00.136) 0:05:05.670 *********** 2025-06-02 20:22:18.916493 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.916500 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.916506 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
20:22:18.916512 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.916518 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.916524 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.916531 | orchestrator | 2025-06-02 20:22:18.916537 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-02 20:22:18.916543 | orchestrator | Monday 02 June 2025 20:18:21 +0000 (0:00:00.913) 0:05:06.583 *********** 2025-06-02 20:22:18.916549 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 20:22:18.916555 | orchestrator | 2025-06-02 20:22:18.916562 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-06-02 20:22:18.916568 | orchestrator | Monday 02 June 2025 20:18:22 +0000 (0:00:00.749) 0:05:07.333 *********** 2025-06-02 20:22:18.916574 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.916580 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.916586 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.916592 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.916598 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.916605 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.916611 | orchestrator | 2025-06-02 20:22:18.916617 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-02 20:22:18.916623 | orchestrator | Monday 02 June 2025 20:18:23 +0000 (0:00:00.691) 0:05:08.025 *********** 2025-06-02 20:22:18.916631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.916886 | orchestrator | 2025-06-02 20:22:18.916896 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-02 20:22:18.916906 | orchestrator | Monday 02 June 2025 20:18:27 +0000 (0:00:04.297) 0:05:12.323 *********** 2025-06-02 20:22:18.916922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.916934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.916945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.916963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.916985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.916996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.917011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.917023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.917033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.917044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 
20:22:18.917090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.917099 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.917110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.917116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.917123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.917129 | orchestrator | 2025-06-02 20:22:18.917136 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-02 20:22:18.917146 | orchestrator | Monday 02 June 2025 20:18:33 +0000 (0:00:06.490) 0:05:18.814 *********** 2025-06-02 20:22:18.917153 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.917159 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.917165 | orchestrator | skipping: [testbed-node-5] 
2025-06-02 20:22:18.917171 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.917177 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.917183 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.917189 | orchestrator | 2025-06-02 20:22:18.917195 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-02 20:22:18.917202 | orchestrator | Monday 02 June 2025 20:18:35 +0000 (0:00:01.736) 0:05:20.550 *********** 2025-06-02 20:22:18.917208 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 20:22:18.917214 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 20:22:18.917220 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 20:22:18.917226 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 20:22:18.917237 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 20:22:18.917243 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 20:22:18.917249 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 20:22:18.917256 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.917263 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 20:22:18.917269 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.917275 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 20:22:18.917281 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.917287 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 
'libvirtd.conf'}) 2025-06-02 20:22:18.917294 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 20:22:18.917300 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 20:22:18.917306 | orchestrator | 2025-06-02 20:22:18.917312 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-02 20:22:18.917319 | orchestrator | Monday 02 June 2025 20:18:39 +0000 (0:00:04.179) 0:05:24.730 *********** 2025-06-02 20:22:18.917325 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.917331 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.917337 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.917343 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.917349 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.917355 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.917361 | orchestrator | 2025-06-02 20:22:18.917368 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-02 20:22:18.917374 | orchestrator | Monday 02 June 2025 20:18:40 +0000 (0:00:01.094) 0:05:25.824 *********** 2025-06-02 20:22:18.917380 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 20:22:18.917386 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 20:22:18.917393 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 20:22:18.917402 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 20:22:18.917409 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 
'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 20:22:18.917422 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 20:22:18.917428 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 20:22:18.917434 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 20:22:18.917440 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 20:22:18.917446 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.917453 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 20:22:18.917459 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:22:18.917465 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 20:22:18.917471 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.917477 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 20:22:18.917483 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.917489 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:22:18.917496 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:22:18.917502 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:22:18.917508 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:22:18.917514 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:22:18.917520 | orchestrator | 2025-06-02 20:22:18.917526 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-02 20:22:18.917532 | orchestrator | Monday 02 June 2025 20:18:47 +0000 (0:00:06.447) 0:05:32.271 *********** 2025-06-02 20:22:18.917539 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:22:18.917545 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:22:18.917555 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:22:18.917561 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 20:22:18.917567 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 20:22:18.917573 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 20:22:18.917580 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 20:22:18.917586 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 20:22:18.917592 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 20:22:18.917598 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:22:18.917604 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 20:22:18.917610 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 20:22:18.917616 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:22:18.917622 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 20:22:18.917632 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:22:18.917639 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 20:22:18.917645 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.917651 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 20:22:18.917657 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.917663 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 20:22:18.917669 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.917675 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:22:18.917681 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:22:18.917688 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:22:18.917697 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 20:22:18.917703 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 20:22:18.917709 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 20:22:18.917715 | orchestrator | 2025-06-02 20:22:18.917721 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-02 20:22:18.917728 | orchestrator | Monday 02 June 2025 20:18:55 +0000 
(0:00:08.462) 0:05:40.733 *********** 2025-06-02 20:22:18.917734 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.917740 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.917746 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.917752 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.917758 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.917764 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.917770 | orchestrator | 2025-06-02 20:22:18.917776 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-02 20:22:18.917782 | orchestrator | Monday 02 June 2025 20:18:56 +0000 (0:00:00.505) 0:05:41.238 *********** 2025-06-02 20:22:18.917788 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.917794 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.917800 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.917806 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.917813 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.917819 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.917825 | orchestrator | 2025-06-02 20:22:18.917831 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-02 20:22:18.917837 | orchestrator | Monday 02 June 2025 20:18:57 +0000 (0:00:00.737) 0:05:41.976 *********** 2025-06-02 20:22:18.917843 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.917849 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:22:18.917855 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.917861 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.917867 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:22:18.917873 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:22:18.917880 | orchestrator | 2025-06-02 20:22:18.917886 | orchestrator | TASK [nova-cell : Copying 
over existing policy file] *************************** 2025-06-02 20:22:18.917892 | orchestrator | Monday 02 June 2025 20:18:59 +0000 (0:00:02.378) 0:05:44.354 *********** 2025-06-02 20:22:18.917903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.917914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.917921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.917928 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.917940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.917947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.917954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.917964 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.917975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:22:18.917982 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:22:18.917991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.917998 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.918004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:22:18.918011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.918059 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.918082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:22:18.918101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-06-02 20:22:18.918107 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.918114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:22:18.918121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:22:18.918127 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.918133 | orchestrator | 2025-06-02 20:22:18.918140 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-02 20:22:18.918146 | orchestrator | Monday 02 June 2025 20:19:02 +0000 (0:00:02.585) 0:05:46.940 *********** 2025-06-02 20:22:18.918156 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-02 20:22:18.918163 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-02 20:22:18.918169 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.918175 | 
orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-02 20:22:18.918181 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-02 20:22:18.918188 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.918194 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-02 20:22:18.918200 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-02 20:22:18.918206 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.918212 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-02 20:22:18.918219 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-02 20:22:18.918225 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.918231 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-02 20:22:18.918237 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-02 20:22:18.918243 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.918250 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-02 20:22:18.918260 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-02 20:22:18.918267 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.918273 | orchestrator | 2025-06-02 20:22:18.918279 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-02 20:22:18.918285 | orchestrator | Monday 02 June 2025 20:19:02 +0000 (0:00:00.686) 0:05:47.626 *********** 2025-06-02 20:22:18.918292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:22:18.918419 | orchestrator | 2025-06-02 20:22:18.918426 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 20:22:18.918432 | orchestrator | Monday 02 June 2025 20:19:07 +0000 (0:00:04.971) 0:05:52.597 *********** 2025-06-02 20:22:18.918438 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.918445 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.918451 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.918457 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.918463 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.918470 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.918476 | orchestrator | 2025-06-02 20:22:18.918482 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:22:18.918488 | orchestrator | Monday 02 June 2025 20:19:08 +0000 (0:00:00.619) 0:05:53.217 *********** 2025-06-02 20:22:18.918494 | orchestrator | 2025-06-02 20:22:18.918501 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:22:18.918507 | orchestrator | Monday 02 June 2025 20:19:08 +0000 (0:00:00.235) 0:05:53.453 *********** 2025-06-02 20:22:18.918519 | orchestrator | 2025-06-02 20:22:18.918525 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:22:18.918531 | orchestrator | Monday 02 June 2025 20:19:08 +0000 (0:00:00.117) 0:05:53.571 *********** 2025-06-02 
20:22:18.918537 | orchestrator | 2025-06-02 20:22:18.918547 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:22:18.918553 | orchestrator | Monday 02 June 2025 20:19:08 +0000 (0:00:00.117) 0:05:53.688 *********** 2025-06-02 20:22:18.918560 | orchestrator | 2025-06-02 20:22:18.918566 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:22:18.918572 | orchestrator | Monday 02 June 2025 20:19:08 +0000 (0:00:00.116) 0:05:53.804 *********** 2025-06-02 20:22:18.918578 | orchestrator | 2025-06-02 20:22:18.918584 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:22:18.918590 | orchestrator | Monday 02 June 2025 20:19:08 +0000 (0:00:00.116) 0:05:53.921 *********** 2025-06-02 20:22:18.918597 | orchestrator | 2025-06-02 20:22:18.918603 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-02 20:22:18.918609 | orchestrator | Monday 02 June 2025 20:19:09 +0000 (0:00:00.123) 0:05:54.045 *********** 2025-06-02 20:22:18.918615 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:22:18.918621 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:22:18.918628 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:22:18.918634 | orchestrator | 2025-06-02 20:22:18.918640 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-02 20:22:18.918646 | orchestrator | Monday 02 June 2025 20:19:24 +0000 (0:00:15.563) 0:06:09.608 *********** 2025-06-02 20:22:18.918652 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:22:18.918659 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:22:18.918665 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:22:18.918671 | orchestrator | 2025-06-02 20:22:18.918677 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] 
*********************** 2025-06-02 20:22:18.918683 | orchestrator | Monday 02 June 2025 20:19:37 +0000 (0:00:12.850) 0:06:22.459 *********** 2025-06-02 20:22:18.918690 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:22:18.918696 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:22:18.918702 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:22:18.918708 | orchestrator | 2025-06-02 20:22:18.918714 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-02 20:22:18.918720 | orchestrator | Monday 02 June 2025 20:19:59 +0000 (0:00:22.252) 0:06:44.711 *********** 2025-06-02 20:22:18.918727 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:22:18.918733 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:22:18.918739 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:22:18.918745 | orchestrator | 2025-06-02 20:22:18.918751 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-02 20:22:18.918758 | orchestrator | Monday 02 June 2025 20:20:36 +0000 (0:00:36.916) 0:07:21.627 *********** 2025-06-02 20:22:18.918764 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-06-02 20:22:18.918770 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:22:18.918776 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2025-06-02 20:22:18.918783 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:22:18.918789 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:22:18.918795 | orchestrator | 2025-06-02 20:22:18.918801 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-02 20:22:18.918808 | orchestrator | Monday 02 June 2025 20:20:43 +0000 (0:00:06.501) 0:07:28.129 *********** 2025-06-02 20:22:18.918817 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:22:18.918824 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:22:18.918830 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:22:18.918836 | orchestrator | 2025-06-02 20:22:18.918842 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-02 20:22:18.918853 | orchestrator | Monday 02 June 2025 20:20:44 +0000 (0:00:00.865) 0:07:28.995 *********** 2025-06-02 20:22:18.918859 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:22:18.918865 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:22:18.918872 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:22:18.918878 | orchestrator | 2025-06-02 20:22:18.918884 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-02 20:22:18.918890 | orchestrator | Monday 02 June 2025 20:21:08 +0000 (0:00:24.335) 0:07:53.331 *********** 2025-06-02 20:22:18.918896 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.918903 | orchestrator | 2025-06-02 20:22:18.918909 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-02 20:22:18.918915 | orchestrator | Monday 02 June 2025 20:21:08 +0000 (0:00:00.149) 0:07:53.480 *********** 2025-06-02 20:22:18.918921 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.918927 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.918934 | orchestrator | skipping: [testbed-node-1] 
2025-06-02 20:22:18.918940 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.918946 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.918952 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-06-02 20:22:18.918959 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:22:18.918965 | orchestrator | 2025-06-02 20:22:18.918972 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-02 20:22:18.918978 | orchestrator | Monday 02 June 2025 20:21:31 +0000 (0:00:22.689) 0:08:16.169 *********** 2025-06-02 20:22:18.918984 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.918990 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.918996 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.919003 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.919009 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.919015 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.919021 | orchestrator | 2025-06-02 20:22:18.919027 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-02 20:22:18.919034 | orchestrator | Monday 02 June 2025 20:21:39 +0000 (0:00:08.584) 0:08:24.754 *********** 2025-06-02 20:22:18.919040 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.919046 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.919052 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.919058 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.919087 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.919094 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-06-02 20:22:18.919100 | orchestrator | 2025-06-02 20:22:18.919106 | orchestrator | TASK [nova-cell : 
Get a list of existing cells] ******************************** 2025-06-02 20:22:18.919112 | orchestrator | Monday 02 June 2025 20:21:43 +0000 (0:00:03.900) 0:08:28.654 *********** 2025-06-02 20:22:18.919118 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:22:18.919125 | orchestrator | 2025-06-02 20:22:18.919131 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 20:22:18.919137 | orchestrator | Monday 02 June 2025 20:21:56 +0000 (0:00:12.322) 0:08:40.977 *********** 2025-06-02 20:22:18.919143 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:22:18.919149 | orchestrator | 2025-06-02 20:22:18.919156 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-02 20:22:18.919162 | orchestrator | Monday 02 June 2025 20:21:57 +0000 (0:00:01.300) 0:08:42.278 *********** 2025-06-02 20:22:18.919168 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.919174 | orchestrator | 2025-06-02 20:22:18.919180 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-02 20:22:18.919187 | orchestrator | Monday 02 June 2025 20:21:58 +0000 (0:00:01.320) 0:08:43.598 *********** 2025-06-02 20:22:18.919197 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:22:18.919203 | orchestrator | 2025-06-02 20:22:18.919209 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-02 20:22:18.919215 | orchestrator | Monday 02 June 2025 20:22:09 +0000 (0:00:10.959) 0:08:54.557 *********** 2025-06-02 20:22:18.919221 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:22:18.919228 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:22:18.919234 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:22:18.919240 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:22:18.919246 | orchestrator | ok: 
[testbed-node-1] 2025-06-02 20:22:18.919253 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:22:18.919259 | orchestrator | 2025-06-02 20:22:18.919265 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-02 20:22:18.919271 | orchestrator | 2025-06-02 20:22:18.919278 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-02 20:22:18.919284 | orchestrator | Monday 02 June 2025 20:22:11 +0000 (0:00:01.820) 0:08:56.378 *********** 2025-06-02 20:22:18.919290 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:22:18.919297 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:22:18.919303 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:22:18.919309 | orchestrator | 2025-06-02 20:22:18.919315 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-02 20:22:18.919321 | orchestrator | 2025-06-02 20:22:18.919328 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-02 20:22:18.919334 | orchestrator | Monday 02 June 2025 20:22:12 +0000 (0:00:01.226) 0:08:57.604 *********** 2025-06-02 20:22:18.919340 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.919346 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.919352 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.919359 | orchestrator | 2025-06-02 20:22:18.919365 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-02 20:22:18.919371 | orchestrator | 2025-06-02 20:22:18.919381 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-02 20:22:18.919388 | orchestrator | Monday 02 June 2025 20:22:13 +0000 (0:00:00.576) 0:08:58.181 *********** 2025-06-02 20:22:18.919394 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-02 20:22:18.919400 | 
orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-02 20:22:18.919406 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-02 20:22:18.919412 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-02 20:22:18.919419 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-02 20:22:18.919425 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-02 20:22:18.919431 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:22:18.919437 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-02 20:22:18.919444 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-02 20:22:18.919450 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-02 20:22:18.919456 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-02 20:22:18.919462 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-02 20:22:18.919469 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-02 20:22:18.919475 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:22:18.919481 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-02 20:22:18.919487 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-02 20:22:18.919493 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-02 20:22:18.919500 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-02 20:22:18.919506 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-02 20:22:18.919512 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-02 20:22:18.919525 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:22:18.919531 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-02 20:22:18.919537 | 
orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-02 20:22:18.919543 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-02 20:22:18.919550 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-02 20:22:18.919556 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-02 20:22:18.919562 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-02 20:22:18.919568 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.919578 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-02 20:22:18.919584 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-02 20:22:18.919590 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-02 20:22:18.919597 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-02 20:22:18.919603 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-02 20:22:18.919609 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-02 20:22:18.919615 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.919621 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-02 20:22:18.919627 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-02 20:22:18.919634 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-02 20:22:18.919640 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-02 20:22:18.919646 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-02 20:22:18.919652 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-02 20:22:18.919658 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.919665 | orchestrator | 2025-06-02 20:22:18.919671 | orchestrator | PLAY [Reload global Nova API services] 
***************************************** 2025-06-02 20:22:18.919677 | orchestrator | 2025-06-02 20:22:18.919683 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-02 20:22:18.919689 | orchestrator | Monday 02 June 2025 20:22:14 +0000 (0:00:01.415) 0:08:59.597 *********** 2025-06-02 20:22:18.919696 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-02 20:22:18.919702 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-02 20:22:18.919708 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.919714 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-02 20:22:18.919720 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-02 20:22:18.919726 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.919733 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-02 20:22:18.919739 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-06-02 20:22:18.919745 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.919751 | orchestrator | 2025-06-02 20:22:18.919757 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-02 20:22:18.919764 | orchestrator | 2025-06-02 20:22:18.919770 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-02 20:22:18.919776 | orchestrator | Monday 02 June 2025 20:22:15 +0000 (0:00:00.775) 0:09:00.373 *********** 2025-06-02 20:22:18.919782 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.919788 | orchestrator | 2025-06-02 20:22:18.919794 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-02 20:22:18.919800 | orchestrator | 2025-06-02 20:22:18.919807 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-02 20:22:18.919813 | 
orchestrator | Monday 02 June 2025 20:22:16 +0000 (0:00:00.656) 0:09:01.029 *********** 2025-06-02 20:22:18.919819 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:22:18.919830 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:22:18.919836 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:22:18.919842 | orchestrator | 2025-06-02 20:22:18.919851 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:22:18.919858 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:22:18.919864 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-02 20:22:18.919871 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-02 20:22:18.919878 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-02 20:22:18.919884 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-02 20:22:18.919890 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-02 20:22:18.919897 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-02 20:22:18.919903 | orchestrator | 2025-06-02 20:22:18.919909 | orchestrator | 2025-06-02 20:22:18.919915 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:22:18.919922 | orchestrator | Monday 02 June 2025 20:22:16 +0000 (0:00:00.415) 0:09:01.445 *********** 2025-06-02 20:22:18.919928 | orchestrator | =============================================================================== 2025-06-02 20:22:18.919934 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.92s 
2025-06-02 20:22:18.919940 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.97s 2025-06-02 20:22:18.919947 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.34s 2025-06-02 20:22:18.919953 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.69s 2025-06-02 20:22:18.919959 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.25s 2025-06-02 20:22:18.919969 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.53s 2025-06-02 20:22:18.919975 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.12s 2025-06-02 20:22:18.919981 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.18s 2025-06-02 20:22:18.919987 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.81s 2025-06-02 20:22:18.919994 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 15.56s 2025-06-02 20:22:18.920000 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.47s 2025-06-02 20:22:18.920007 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.85s 2025-06-02 20:22:18.920013 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.57s 2025-06-02 20:22:18.920019 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.32s 2025-06-02 20:22:18.920025 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.81s 2025-06-02 20:22:18.920031 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.01s 2025-06-02 20:22:18.920037 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.96s 2025-06-02 
20:22:18.920044 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.30s 2025-06-02 20:22:18.920050 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.58s 2025-06-02 20:22:18.920056 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.46s 2025-06-02 20:22:18.920106 | orchestrator | 2025-06-02 20:22:18 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:22:18.920115 | orchestrator | 2025-06-02 20:22:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:24:48.040469 | orchestrator | 2025-06-02 20:24:48 | INFO  | Task a7a28115-5b10-4b68-8d19-44276224b534 is in state STARTED 2025-06-02 20:24:48.047315 | orchestrator | 2025-06-02 20:24:48 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state STARTED 2025-06-02 20:24:48.047897 | orchestrator | 2025-06-02 20:24:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:24:57.207707 | orchestrator | 2025-06-02 20:24:57 | INFO  | Task a7a28115-5b10-4b68-8d19-44276224b534 is in state STARTED 2025-06-02 20:24:57.208551 | orchestrator | 2025-06-02 20:24:57 | INFO  | Task 2c75cf0d-667c-4f2e-9d51-d5b38ac4dbb9 is in state SUCCESS 2025-06-02 20:24:57.210913 | orchestrator |
2025-06-02 20:24:57.210990 | orchestrator | 2025-06-02 20:24:57.211664 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:24:57.212177 | orchestrator | 2025-06-02 20:24:57.212198 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:24:57.212209 | orchestrator | Monday 02 June 2025 20:20:07 +0000 (0:00:00.253) 0:00:00.253 *********** 2025-06-02 20:24:57.212221 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:24:57.212233 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:24:57.212244 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:24:57.212255 | orchestrator | 2025-06-02 20:24:57.212266 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:24:57.212277 | orchestrator | Monday 02 June 2025 20:20:07 +0000 (0:00:00.304) 0:00:00.557 *********** 2025-06-02 20:24:57.212288 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-02 20:24:57.212299 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-02 20:24:57.212310 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-02 20:24:57.212320 | orchestrator | 2025-06-02 20:24:57.212331 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-02 20:24:57.212342 | orchestrator | 2025-06-02 20:24:57.212353 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 20:24:57.212422 | orchestrator | Monday 02 June 2025 20:20:08 +0000 (0:00:00.384) 0:00:00.941 *********** 2025-06-02 20:24:57.212435 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:24:57.212447 | orchestrator | 2025-06-02 20:24:57.212458 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2025-06-02 20:24:57.212468 | orchestrator | Monday 02 June 2025 20:20:08 +0000 (0:00:00.534) 0:00:01.476 ***********
2025-06-02 20:24:57.212480 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-06-02 20:24:57.212490 | orchestrator |
2025-06-02 20:24:57.212501 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-06-02 20:24:57.212512 | orchestrator | Monday 02 June 2025 20:20:12 +0000 (0:00:03.626) 0:00:05.103 ***********
2025-06-02 20:24:57.212523 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-06-02 20:24:57.212534 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-06-02 20:24:57.212545 | orchestrator |
2025-06-02 20:24:57.212556 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-06-02 20:24:57.212567 | orchestrator | Monday 02 June 2025 20:20:19 +0000 (0:00:06.808) 0:00:11.911 ***********
2025-06-02 20:24:57.212578 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 20:24:57.212588 | orchestrator |
2025-06-02 20:24:57.212599 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-06-02 20:24:57.212610 | orchestrator | Monday 02 June 2025 20:20:22 +0000 (0:00:03.398) 0:00:15.310 ***********
2025-06-02 20:24:57.212621 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 20:24:57.212631 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-02 20:24:57.212642 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-02 20:24:57.212653 | orchestrator |
2025-06-02 20:24:57.212663 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-06-02 20:24:57.212674 | orchestrator | Monday 02 June 2025 20:20:30 +0000 (0:00:08.456) 0:00:23.767 ***********
2025-06-02 20:24:57.212684 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 20:24:57.212695 | orchestrator |
2025-06-02 20:24:57.212706 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-06-02 20:24:57.212716 | orchestrator | Monday 02 June 2025 20:20:34 +0000 (0:00:03.562) 0:00:27.329 ***********
2025-06-02 20:24:57.212727 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-02 20:24:57.212738 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-02 20:24:57.212748 | orchestrator |
2025-06-02 20:24:57.212759 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-06-02 20:24:57.212770 | orchestrator | Monday 02 June 2025 20:20:42 +0000 (0:00:07.837) 0:00:35.167 ***********
2025-06-02 20:24:57.212783 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-06-02 20:24:57.212795 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-06-02 20:24:57.212825 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-06-02 20:24:57.212838 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-06-02 20:24:57.212850 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-06-02 20:24:57.212863 | orchestrator |
2025-06-02 20:24:57.212875 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 20:24:57.212902 | orchestrator | Monday 02 June 2025 20:20:58 +0000 (0:00:16.397) 0:00:51.565 ***********
2025-06-02 20:24:57.212914 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:24:57.212927 | orchestrator |
2025-06-02 20:24:57.212940 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-06-02 20:24:57.212961 | orchestrator | Monday 02 June 2025 20:20:59 +0000 (0:00:00.577) 0:00:52.142 ***********
2025-06-02 20:24:57.212973 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.212986 | orchestrator |
2025-06-02 20:24:57.212998 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-06-02 20:24:57.213011 | orchestrator | Monday 02 June 2025 20:21:04 +0000 (0:00:05.397) 0:00:57.540 ***********
2025-06-02 20:24:57.213024 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.213036 | orchestrator |
2025-06-02 20:24:57.213049 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-06-02 20:24:57.213112 | orchestrator | Monday 02 June 2025 20:21:09 +0000 (0:00:04.585) 0:01:02.125 ***********
2025-06-02 20:24:57.213126 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:24:57.213139 | orchestrator |
2025-06-02 20:24:57.213151 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-06-02 20:24:57.213162 | orchestrator | Monday 02 June 2025 20:21:12 +0000 (0:00:03.580) 0:01:05.706 ***********
2025-06-02 20:24:57.213173 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-06-02 20:24:57.213183 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-06-02 20:24:57.213194 | orchestrator |
2025-06-02 20:24:57.213205 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-06-02 20:24:57.213215 | orchestrator | Monday 02 June 2025 20:21:23 +0000 (0:00:10.865) 0:01:16.572 ***********
2025-06-02 20:24:57.213226 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-06-02 20:24:57.213237 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-06-02 20:24:57.213250 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-06-02 20:24:57.213263 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-06-02 20:24:57.213273 | orchestrator |
2025-06-02 20:24:57.213284 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-06-02 20:24:57.213295 | orchestrator | Monday 02 June 2025 20:21:41 +0000 (0:00:17.527) 0:01:34.100 ***********
2025-06-02 20:24:57.213305 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.213316 | orchestrator |
2025-06-02 20:24:57.213327 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-06-02 20:24:57.213337 | orchestrator | Monday 02 June 2025 20:21:46 +0000 (0:00:05.014) 0:01:39.114 ***********
2025-06-02 20:24:57.213348 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.213359 | orchestrator |
2025-06-02 20:24:57.213370 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-06-02 20:24:57.213380 | orchestrator | Monday 02 June 2025 20:21:52 +0000 (0:00:06.206) 0:01:45.321 ***********
2025-06-02 20:24:57.213391 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:24:57.213402 | orchestrator |
2025-06-02 20:24:57.213412 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-06-02 20:24:57.213423 | orchestrator | Monday 02 June 2025 20:21:52 +0000 (0:00:00.205) 0:01:45.526 ***********
2025-06-02 20:24:57.213434 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.213445 | orchestrator |
2025-06-02 20:24:57.213455 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 20:24:57.213466 | orchestrator | Monday 02 June 2025 20:21:57 +0000 (0:00:04.639) 0:01:50.165 ***********
2025-06-02 20:24:57.213477 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:24:57.213488 | orchestrator |
2025-06-02 20:24:57.213499 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-06-02 20:24:57.213509 | orchestrator | Monday 02 June 2025 20:21:58 +0000 (0:00:01.216) 0:01:51.382 ***********
2025-06-02 20:24:57.213527 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:24:57.213538 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:24:57.213548 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.213559 | orchestrator |
2025-06-02 20:24:57.213570 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-06-02 20:24:57.213580 | orchestrator | Monday 02 June 2025 20:22:04 +0000 (0:00:05.909) 0:01:57.292 ***********
2025-06-02 20:24:57.213591 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:24:57.213602 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:24:57.213612 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.213623 | orchestrator |
2025-06-02 20:24:57.213634 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-06-02 20:24:57.213644 | orchestrator | Monday 02 June 2025 20:22:09 +0000 (0:00:05.364) 0:02:02.656 ***********
2025-06-02 20:24:57.213655 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.213665 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:24:57.213676 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:24:57.213686 | orchestrator |
2025-06-02 20:24:57.213703 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-06-02 20:24:57.213722 | orchestrator | Monday 02 June 2025 20:22:10 +0000 (0:00:00.805) 0:02:03.461 ***********
2025-06-02 20:24:57.213745 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:24:57.213772 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:24:57.213790 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:24:57.213924 | orchestrator |
2025-06-02 20:24:57.213944 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-06-02 20:24:57.213961 | orchestrator | Monday 02 June 2025 20:22:12 +0000 (0:00:02.121) 0:02:05.582 ***********
2025-06-02 20:24:57.213988 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:24:57.214004 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.214095 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:24:57.214116 | orchestrator |
2025-06-02 20:24:57.214134 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-06-02 20:24:57.214151 | orchestrator | Monday 02 June 2025 20:22:14 +0000 (0:00:01.343) 0:02:06.926 ***********
2025-06-02 20:24:57.214169 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.214187 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:24:57.214205 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:24:57.214224 | orchestrator |
2025-06-02 20:24:57.214241 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-06-02 20:24:57.214259 | orchestrator | Monday 02 June 2025 20:22:15 +0000 (0:00:01.251) 0:02:08.177 ***********
2025-06-02 20:24:57.214278 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:24:57.214295 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.214312 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:24:57.214328 | orchestrator |
2025-06-02 20:24:57.214415 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-06-02 20:24:57.214433 | orchestrator | Monday 02 June 2025 20:22:17 +0000 (0:00:01.990) 0:02:10.168 ***********
2025-06-02 20:24:57.214448 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:24:57.214463 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:24:57.214477 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:24:57.214493 | orchestrator |
2025-06-02 20:24:57.214510 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-06-02 20:24:57.214527 | orchestrator | Monday 02 June 2025 20:22:19 +0000 (0:00:01.848) 0:02:12.017 ***********
2025-06-02 20:24:57.214544 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:24:57.214561 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:24:57.214577 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:24:57.214594 | orchestrator |
2025-06-02 20:24:57.214612 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-06-02 20:24:57.214629 | orchestrator | Monday 02 June 2025 20:22:19 +0000 (0:00:00.627) 0:02:12.644 ***********
2025-06-02 20:24:57.214647 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:24:57.214681 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:24:57.214699 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:24:57.214716 | orchestrator |
2025-06-02 20:24:57.214733 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 20:24:57.214750 | orchestrator | Monday 02 June 2025 20:22:22 +0000 (0:00:02.816) 0:02:15.461 ***********
2025-06-02 20:24:57.214766 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:24:57.214784 | orchestrator |
2025-06-02 20:24:57.214831 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-06-02 20:24:57.214850 | orchestrator | Monday 02 June 2025 20:22:23 +0000 (0:00:00.708) 0:02:16.169 ***********
2025-06-02 20:24:57.214867 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:24:57.214883 | orchestrator |
2025-06-02 20:24:57.214898 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-06-02 20:24:57.214913 | orchestrator | Monday 02 June 2025 20:22:27 +0000 (0:00:04.032) 0:02:20.201 ***********
2025-06-02 20:24:57.214929 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:24:57.214946 | orchestrator |
2025-06-02 20:24:57.214962 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-06-02 20:24:57.214978 | orchestrator | Monday 02 June 2025 20:22:30 +0000 (0:00:03.102) 0:02:23.304 ***********
2025-06-02 20:24:57.214994 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-06-02 20:24:57.215011 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-06-02 20:24:57.215027 | orchestrator |
2025-06-02 20:24:57.215043 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-06-02 20:24:57.215060 | orchestrator | Monday 02 June 2025 20:22:37 +0000 (0:00:07.073) 0:02:30.378 ***********
2025-06-02 20:24:57.215075 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:24:57.215091 | orchestrator |
2025-06-02 20:24:57.215106 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-06-02 20:24:57.215120 | orchestrator | Monday 02 June 2025 20:22:40 +0000 (0:00:03.436) 0:02:33.814 ***********
2025-06-02 20:24:57.215135 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:24:57.215149 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:24:57.215163 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:24:57.215177 | orchestrator |
2025-06-02 20:24:57.215192 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-06-02 20:24:57.215208 | orchestrator | Monday 02 June 2025 20:22:41 +0000 (0:00:00.337) 0:02:34.152 ***********
2025-06-02 20:24:57.215228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 20:24:57.215323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 20:24:57.215361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 20:24:57.215385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 20:24:57.215403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 20:24:57.215419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 20:24:57.215436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.215459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.215535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.215557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.215573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.215590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.215602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:24:57.215613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:24:57.215628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:24:57.215645 | orchestrator |
2025-06-02 20:24:57.215656 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2025-06-02 20:24:57.215666 | orchestrator | Monday 02 June 2025 20:22:43 +0000 (0:00:02.615) 0:02:36.767 ***********
2025-06-02 20:24:57.215676 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:24:57.215686 | orchestrator |
2025-06-02 20:24:57.215738 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2025-06-02 20:24:57.215752 | orchestrator | Monday 02 June 2025 20:22:44 +0000 (0:00:00.307) 0:02:37.075 ***********
2025-06-02 20:24:57.215761 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:24:57.215771 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:24:57.215781 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:24:57.215790 | orchestrator |
2025-06-02 20:24:57.215826 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2025-06-02 20:24:57.215836 | orchestrator | Monday 02 June 2025 20:22:44 +0000 (0:00:00.313) 0:02:37.388 ***********
2025-06-02 20:24:57.215848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 20:24:57.215859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 20:24:57.215871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.215882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.215905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:24:57.215916 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:24:57.215958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 20:24:57.215970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 20:24:57.215980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.215990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.216001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:24:57.216023 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:24:57.216038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 20:24:57.216078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 20:24:57.216090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.216100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 20:24:57.216110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:24:57.216120 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:24:57.216132 | orchestrator |
2025-06-02 20:24:57.216149 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 20:24:57.216165 | orchestrator | Monday 02 June 2025 20:22:45 +0000 (0:00:00.691) 0:02:38.080 ***********
2025-06-02 20:24:57.216181 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:24:57.216209 | orchestrator |
2025-06-02 20:24:57.216225 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2025-06-02 20:24:57.216242 | orchestrator | Monday 02 June 2025 20:22:45 +0000 (0:00:00.530) 0:02:38.611 ***********
2025-06-02 20:24:57.216259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 20:24:57.216303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 20:24:57.216316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 20:24:57.216326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.216336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.216354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.216364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.216379 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.216395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.216406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.216417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.216427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.216443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.216458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.216477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.216488 | orchestrator | 2025-06-02 20:24:57.216498 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-06-02 20:24:57.216508 | orchestrator | Monday 02 June 2025 20:22:51 +0000 (0:00:05.355) 0:02:43.966 *********** 2025-06-02 20:24:57.216518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:24:57.216528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:24:57.216538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:24:57.216579 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:24:57.216594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:24:57.216605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:24:57.216615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:24:57.216651 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:24:57.216666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:24:57.216681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:24:57.216692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:24:57.216731 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:24:57.216742 | orchestrator | 2025-06-02 20:24:57.216751 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-06-02 
20:24:57.216761 | orchestrator | Monday 02 June 2025 20:22:51 +0000 (0:00:00.648) 0:02:44.615 *********** 2025-06-02 20:24:57.216771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:24:57.216786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:24:57.216797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:24:57.216877 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:24:57.216887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:24:57.216897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:24:57.216907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:24:57.216952 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:24:57.216963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:24:57.216979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:24:57.216989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.216999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:24:57.217013 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:24:57.217024 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:24:57.217033 | orchestrator | 2025-06-02 20:24:57.217043 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-06-02 20:24:57.217053 | orchestrator | Monday 02 June 2025 20:22:52 +0000 (0:00:00.859) 0:02:45.474 *********** 2025-06-02 20:24:57.217070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:24:57.217087 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:24:57.217097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:24:57.217107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.217121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.217132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.217148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 
20:24:57.217238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217264 | orchestrator | 2025-06-02 20:24:57.217274 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-06-02 20:24:57.217283 | orchestrator | Monday 02 June 2025 20:22:57 +0000 (0:00:05.377) 0:02:50.852 *********** 2025-06-02 20:24:57.217293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-02 20:24:57.217304 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-02 20:24:57.217314 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-02 20:24:57.217323 | orchestrator | 2025-06-02 20:24:57.217333 | orchestrator | TASK [octavia 
: Copying over octavia.conf] ************************************* 2025-06-02 20:24:57.217342 | orchestrator | Monday 02 June 2025 20:22:59 +0000 (0:00:01.597) 0:02:52.449 *********** 2025-06-02 20:24:57.217353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:24:57.217367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:24:57.217384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:24:57.217400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.217411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.217421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.217431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}}) 2025-06-02 20:24:57.217455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217497 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.217542 | orchestrator | 2025-06-02 20:24:57.217552 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-06-02 20:24:57.217567 | orchestrator | Monday 02 June 2025 20:23:14 +0000 (0:00:14.747) 0:03:07.196 *********** 2025-06-02 20:24:57.217577 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.217587 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:24:57.217596 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:24:57.217606 | orchestrator | 2025-06-02 20:24:57.217615 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-06-02 20:24:57.217625 | orchestrator | Monday 02 June 2025 20:23:15 +0000 (0:00:01.558) 0:03:08.754 *********** 2025-06-02 20:24:57.217634 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 20:24:57.217644 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 20:24:57.217658 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 20:24:57.217668 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 20:24:57.217678 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 20:24:57.217687 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 20:24:57.217696 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 20:24:57.217706 | orchestrator | changed: 
[testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 20:24:57.217715 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 20:24:57.217725 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 20:24:57.217734 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 20:24:57.217744 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 20:24:57.217753 | orchestrator | 2025-06-02 20:24:57.217763 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-06-02 20:24:57.217772 | orchestrator | Monday 02 June 2025 20:23:21 +0000 (0:00:05.303) 0:03:14.057 *********** 2025-06-02 20:24:57.217782 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 20:24:57.217791 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 20:24:57.217832 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 20:24:57.217842 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 20:24:57.217851 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 20:24:57.217861 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 20:24:57.217870 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 20:24:57.217880 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 20:24:57.217890 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 20:24:57.217899 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 20:24:57.217909 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 20:24:57.217918 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 20:24:57.217928 | orchestrator | 2025-06-02 20:24:57.217937 | 
orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-06-02 20:24:57.217947 | orchestrator | Monday 02 June 2025 20:23:26 +0000 (0:00:05.039) 0:03:19.097 *********** 2025-06-02 20:24:57.217956 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 20:24:57.217966 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 20:24:57.217975 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 20:24:57.217985 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 20:24:57.217994 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 20:24:57.218004 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 20:24:57.218048 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 20:24:57.218067 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 20:24:57.218077 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 20:24:57.218086 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 20:24:57.218096 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 20:24:57.218106 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 20:24:57.218115 | orchestrator | 2025-06-02 20:24:57.218125 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-06-02 20:24:57.218135 | orchestrator | Monday 02 June 2025 20:23:31 +0000 (0:00:05.197) 0:03:24.295 *********** 2025-06-02 20:24:57.218153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:24:57.218172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:24:57.218183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:24:57.218194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.218231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.218242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 20:24:57.218257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.218272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.218282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.218293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.218303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.218319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 20:24:57.218329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.218345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.218363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:24:57.218373 | orchestrator | 2025-06-02 20:24:57.218383 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 20:24:57.218393 | orchestrator | Monday 02 June 2025 20:23:35 +0000 (0:00:03.717) 0:03:28.012 *********** 2025-06-02 20:24:57.218403 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:24:57.218413 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:24:57.218423 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:24:57.218432 | orchestrator | 2025-06-02 20:24:57.218442 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-06-02 20:24:57.218451 | orchestrator | Monday 02 June 2025 20:23:35 +0000 (0:00:00.290) 0:03:28.303 *********** 2025-06-02 20:24:57.218461 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.218470 | orchestrator | 2025-06-02 20:24:57.218480 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-06-02 20:24:57.218490 | orchestrator | Monday 02 June 2025 20:23:37 +0000 (0:00:02.088) 0:03:30.391 *********** 2025-06-02 20:24:57.218499 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.218509 | orchestrator | 2025-06-02 20:24:57.218518 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-06-02 20:24:57.218528 | orchestrator | Monday 02 June 2025 20:23:39 +0000 (0:00:02.496) 0:03:32.888 *********** 2025-06-02 20:24:57.218537 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.218553 | orchestrator | 2025-06-02 20:24:57.218562 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-06-02 20:24:57.218572 | orchestrator | Monday 02 June 2025 20:23:42 +0000 (0:00:02.095) 0:03:34.984 *********** 2025-06-02 20:24:57.218581 | orchestrator | changed: [testbed-node-0] 
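The octavia container definitions looped over above each carry a `healthcheck` dict with string-valued seconds (`interval`, `timeout`, `start_period`) and a `CMD-SHELL` test. As an illustration of that data shape only (not the actual kolla_container module code), here is a sketch of how such a dict could map onto the Docker Engine API healthcheck form, which expects durations in nanoseconds:

```python
# Sketch: convert a kolla-ansible-style healthcheck dict, as seen in the
# octavia task output, into the Docker Engine API shape (nanosecond durations).
# Illustrative only; the real deployment tooling does this internally.

NS_PER_S = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    return {
        "Test": hc["test"],                           # e.g. ['CMD-SHELL', 'healthcheck_port octavia-worker 5672']
        "Interval": int(hc["interval"]) * NS_PER_S,   # seconds -> nanoseconds
        "Timeout": int(hc["timeout"]) * NS_PER_S,
        "Retries": int(hc["retries"]),
        "StartPeriod": int(hc["start_period"]) * NS_PER_S,
    }

kolla_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
    "timeout": "30",
}
docker_hc = to_docker_healthcheck(kolla_hc)
print(docker_hc["Interval"])  # 30000000000
```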
2025-06-02 20:24:57.218591 | orchestrator | 2025-06-02 20:24:57.218601 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-06-02 20:24:57.218610 | orchestrator | Monday 02 June 2025 20:23:44 +0000 (0:00:02.160) 0:03:37.145 *********** 2025-06-02 20:24:57.218620 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.218629 | orchestrator | 2025-06-02 20:24:57.218639 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 20:24:57.218648 | orchestrator | Monday 02 June 2025 20:24:04 +0000 (0:00:20.032) 0:03:57.178 *********** 2025-06-02 20:24:57.218658 | orchestrator | 2025-06-02 20:24:57.218668 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 20:24:57.218677 | orchestrator | Monday 02 June 2025 20:24:04 +0000 (0:00:00.066) 0:03:57.244 *********** 2025-06-02 20:24:57.218686 | orchestrator | 2025-06-02 20:24:57.218696 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 20:24:57.218706 | orchestrator | Monday 02 June 2025 20:24:04 +0000 (0:00:00.062) 0:03:57.307 *********** 2025-06-02 20:24:57.218715 | orchestrator | 2025-06-02 20:24:57.218725 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-06-02 20:24:57.218734 | orchestrator | Monday 02 June 2025 20:24:04 +0000 (0:00:00.068) 0:03:57.376 *********** 2025-06-02 20:24:57.218744 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.218753 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:24:57.218763 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:24:57.218772 | orchestrator | 2025-06-02 20:24:57.218782 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-06-02 20:24:57.218791 | orchestrator | Monday 02 June 2025 20:24:21 +0000 (0:00:16.725) 0:04:14.101 *********** 
2025-06-02 20:24:57.218819 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.218829 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:24:57.218839 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:24:57.218848 | orchestrator | 2025-06-02 20:24:57.218858 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-06-02 20:24:57.218868 | orchestrator | Monday 02 June 2025 20:24:27 +0000 (0:00:06.720) 0:04:20.822 *********** 2025-06-02 20:24:57.218877 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.218887 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:24:57.218896 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:24:57.218906 | orchestrator | 2025-06-02 20:24:57.218915 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-06-02 20:24:57.218925 | orchestrator | Monday 02 June 2025 20:24:33 +0000 (0:00:05.692) 0:04:26.514 *********** 2025-06-02 20:24:57.218934 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.218944 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:24:57.218953 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:24:57.218963 | orchestrator | 2025-06-02 20:24:57.218972 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-06-02 20:24:57.218982 | orchestrator | Monday 02 June 2025 20:24:44 +0000 (0:00:10.774) 0:04:37.289 *********** 2025-06-02 20:24:57.218991 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:24:57.219001 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:24:57.219011 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:24:57.219020 | orchestrator | 2025-06-02 20:24:57.219030 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:24:57.219045 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 
ignored=0 2025-06-02 20:24:57.219055 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:24:57.219070 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:24:57.219080 | orchestrator | 2025-06-02 20:24:57.219090 | orchestrator | 2025-06-02 20:24:57.219099 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:24:57.219109 | orchestrator | Monday 02 June 2025 20:24:54 +0000 (0:00:10.602) 0:04:47.892 *********** 2025-06-02 20:24:57.219124 | orchestrator | =============================================================================== 2025-06-02 20:24:57.219134 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.03s 2025-06-02 20:24:57.219143 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.53s 2025-06-02 20:24:57.219153 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.73s 2025-06-02 20:24:57.219163 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.40s 2025-06-02 20:24:57.219172 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 14.75s 2025-06-02 20:24:57.219182 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.87s 2025-06-02 20:24:57.219191 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.77s 2025-06-02 20:24:57.219201 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.60s 2025-06-02 20:24:57.219210 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.46s 2025-06-02 20:24:57.219220 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.84s 2025-06-02 20:24:57.219229 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 7.07s 2025-06-02 20:24:57.219239 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.81s 2025-06-02 20:24:57.219248 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.72s 2025-06-02 20:24:57.219257 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.21s 2025-06-02 20:24:57.219267 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.91s 2025-06-02 20:24:57.219276 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.69s 2025-06-02 20:24:57.219286 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.40s 2025-06-02 20:24:57.219295 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.38s 2025-06-02 20:24:57.219305 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.36s 2025-06-02 20:24:57.219314 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.36s 2025-06-02 20:24:57.219324 | orchestrator | 2025-06-02 20:24:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:25:00.263434 | orchestrator | 2025-06-02 20:25:00 | INFO  | Task a7a28115-5b10-4b68-8d19-44276224b534 is in state STARTED 2025-06-02 20:25:00.263547 | orchestrator | 2025-06-02 20:25:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:25:03.318452 | orchestrator | 2025-06-02 20:25:03 | INFO  | Task a7a28115-5b10-4b68-8d19-44276224b534 is in state STARTED 2025-06-02 20:25:03.318533 | orchestrator | 2025-06-02 20:25:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:25:06.368463 | orchestrator | 2025-06-02 20:25:06 | INFO  | Task a7a28115-5b10-4b68-8d19-44276224b534 is in state SUCCESS 2025-06-02 20:25:06.368561 | orchestrator 
| 2025-06-02 20:25:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:09.405239 | orchestrator | 2025-06-02 20:25:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:12.448311 | orchestrator | 2025-06-02 20:25:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:15.486592 | orchestrator | 2025-06-02 20:25:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:18.528956 | orchestrator | 2025-06-02 20:25:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:21.571522 | orchestrator | 2025-06-02 20:25:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:24.616116 | orchestrator | 2025-06-02 20:25:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:27.654301 | orchestrator | 2025-06-02 20:25:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:30.697599 | orchestrator | 2025-06-02 20:25:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:33.737599 | orchestrator | 2025-06-02 20:25:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:36.776688 | orchestrator | 2025-06-02 20:25:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:39.820689 | orchestrator | 2025-06-02 20:25:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:42.862357 | orchestrator | 2025-06-02 20:25:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:45.902842 | orchestrator | 2025-06-02 20:25:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:48.947140 | orchestrator | 2025-06-02 20:25:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:51.985822 | orchestrator | 2025-06-02 20:25:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:25:55.031681 | orchestrator | 2025-06-02 20:25:55 | INFO  | Wait 1 
second(s) until refresh of running tasks 2025-06-02 20:25:58.071422 | orchestrator | 2025-06-02 20:25:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:26:01.107993 | orchestrator | 2025-06-02 20:26:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:26:04.151827 | orchestrator | 2025-06-02 20:26:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:26:07.190923 | orchestrator | 2025-06-02 20:26:07.191025 | orchestrator | None 2025-06-02 20:26:07.425831 | orchestrator | 2025-06-02 20:26:07.434251 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Jun 2 20:26:07 UTC 2025 2025-06-02 20:26:07.434330 | orchestrator | 2025-06-02 20:26:07.923530 | orchestrator | ok: Runtime: 0:33:18.722414 2025-06-02 20:26:08.195526 | 2025-06-02 20:26:08.195670 | TASK [Bootstrap services] 2025-06-02 20:26:08.985644 | orchestrator | 2025-06-02 20:26:08.985815 | orchestrator | # BOOTSTRAP 2025-06-02 20:26:08.985826 | orchestrator | 2025-06-02 20:26:08.985831 | orchestrator | + set -e 2025-06-02 20:26:08.985835 | orchestrator | + echo 2025-06-02 20:26:08.985842 | orchestrator | + echo '# BOOTSTRAP' 2025-06-02 20:26:08.985849 | orchestrator | + echo 2025-06-02 20:26:08.985869 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-02 20:26:08.994091 | orchestrator | + set -e 2025-06-02 20:26:08.994196 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-02 20:26:12.665863 | orchestrator | 2025-06-02 20:26:12 | INFO  | It takes a moment until task d6c0af74-ef50-4134-83c6-ea54fab60175 (flavor-manager) has been started and output is visible here. 
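The repeated "Task ... is in state STARTED / Wait 1 second(s) until the next check" lines above are a fixed-interval polling loop on the task state. A minimal sketch of that control flow (the osism client's real implementation differs; this only illustrates the pattern):

```python
# Sketch of the fixed-interval polling loop visible in the log: query the
# task state, sleep, and stop once the task reports SUCCESS.
import itertools

def wait_for_task(get_state, interval=1.0, sleep=lambda s: None):
    """Poll get_state() until it returns 'SUCCESS'; return the poll count."""
    for polls in itertools.count(1):
        if get_state() == "SUCCESS":
            return polls
        sleep(interval)  # "Wait 1 second(s) until the next check"

# A task that is STARTED twice before succeeding takes three polls:
states = iter(["STARTED", "STARTED", "SUCCESS"])
print(wait_for_task(lambda: next(states)))  # 3
```

An injectable `sleep` keeps the sketch testable without real delays; a production loop would also want a timeout bound.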
2025-06-02 20:26:16.288379 | orchestrator | 2025-06-02 20:26:16 | INFO  | Flavor SCS-1V-4 created 2025-06-02 20:26:16.497843 | orchestrator | 2025-06-02 20:26:16 | INFO  | Flavor SCS-2V-8 created 2025-06-02 20:26:16.684547 | orchestrator | 2025-06-02 20:26:16 | INFO  | Flavor SCS-4V-16 created 2025-06-02 20:26:16.838403 | orchestrator | 2025-06-02 20:26:16 | INFO  | Flavor SCS-8V-32 created 2025-06-02 20:26:16.978073 | orchestrator | 2025-06-02 20:26:16 | INFO  | Flavor SCS-1V-2 created 2025-06-02 20:26:17.140809 | orchestrator | 2025-06-02 20:26:17 | INFO  | Flavor SCS-2V-4 created 2025-06-02 20:26:17.286324 | orchestrator | 2025-06-02 20:26:17 | INFO  | Flavor SCS-4V-8 created 2025-06-02 20:26:17.411939 | orchestrator | 2025-06-02 20:26:17 | INFO  | Flavor SCS-8V-16 created 2025-06-02 20:26:17.578529 | orchestrator | 2025-06-02 20:26:17 | INFO  | Flavor SCS-16V-32 created 2025-06-02 20:26:17.720554 | orchestrator | 2025-06-02 20:26:17 | INFO  | Flavor SCS-1V-8 created 2025-06-02 20:26:17.839245 | orchestrator | 2025-06-02 20:26:17 | INFO  | Flavor SCS-2V-16 created 2025-06-02 20:26:17.985400 | orchestrator | 2025-06-02 20:26:17 | INFO  | Flavor SCS-4V-32 created 2025-06-02 20:26:18.112803 | orchestrator | 2025-06-02 20:26:18 | INFO  | Flavor SCS-1L-1 created 2025-06-02 20:26:18.250874 | orchestrator | 2025-06-02 20:26:18 | INFO  | Flavor SCS-2V-4-20s created 2025-06-02 20:26:18.394354 | orchestrator | 2025-06-02 20:26:18 | INFO  | Flavor SCS-4V-16-100s created 2025-06-02 20:26:18.539780 | orchestrator | 2025-06-02 20:26:18 | INFO  | Flavor SCS-1V-4-10 created 2025-06-02 20:26:18.672790 | orchestrator | 2025-06-02 20:26:18 | INFO  | Flavor SCS-2V-8-20 created 2025-06-02 20:26:18.805548 | orchestrator | 2025-06-02 20:26:18 | INFO  | Flavor SCS-4V-16-50 created 2025-06-02 20:26:18.954272 | orchestrator | 2025-06-02 20:26:18 | INFO  | Flavor SCS-8V-32-100 created 2025-06-02 20:26:19.097137 | orchestrator | 2025-06-02 20:26:19 | INFO  | Flavor SCS-1V-2-5 created 
2025-06-02 20:26:19.218066 | orchestrator | 2025-06-02 20:26:19 | INFO  | Flavor SCS-2V-4-10 created 2025-06-02 20:26:19.358436 | orchestrator | 2025-06-02 20:26:19 | INFO  | Flavor SCS-4V-8-20 created 2025-06-02 20:26:19.489627 | orchestrator | 2025-06-02 20:26:19 | INFO  | Flavor SCS-8V-16-50 created 2025-06-02 20:26:19.638567 | orchestrator | 2025-06-02 20:26:19 | INFO  | Flavor SCS-16V-32-100 created 2025-06-02 20:26:19.773981 | orchestrator | 2025-06-02 20:26:19 | INFO  | Flavor SCS-1V-8-20 created 2025-06-02 20:26:19.917527 | orchestrator | 2025-06-02 20:26:19 | INFO  | Flavor SCS-2V-16-50 created 2025-06-02 20:26:20.047758 | orchestrator | 2025-06-02 20:26:20 | INFO  | Flavor SCS-4V-32-100 created 2025-06-02 20:26:20.196268 | orchestrator | 2025-06-02 20:26:20 | INFO  | Flavor SCS-1L-1-5 created 2025-06-02 20:26:22.339751 | orchestrator | 2025-06-02 20:26:22 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-02 20:26:22.344499 | orchestrator | Registering Redlock._acquired_script 2025-06-02 20:26:22.344590 | orchestrator | Registering Redlock._extend_script 2025-06-02 20:26:22.344638 | orchestrator | Registering Redlock._release_script 2025-06-02 20:26:22.403923 | orchestrator | 2025-06-02 20:26:22 | INFO  | Task 747dc73a-43cc-4ff3-991c-9fafebe99b5c (bootstrap-basic) was prepared for execution. 2025-06-02 20:26:22.404027 | orchestrator | 2025-06-02 20:26:22 | INFO  | It takes a moment until task 747dc73a-43cc-4ff3-991c-9fafebe99b5c (bootstrap-basic) has been started and output is visible here. 
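The flavor-manager run above creates flavors following the SCS naming convention (`SCS-2V-8-20`, `SCS-2V-4-20s`, ...). Under my reading of that convention (vCPU count with a class letter, RAM in GiB, optional root disk in GB with an optional qualifier letter such as the `s` in `20s`), the names can be parsed like this; treat the field meanings as an assumption rather than the normative SCS spec:

```python
import re

# Rough parser for the SCS flavor names listed in the log, e.g. SCS-2V-8-20
# or SCS-2V-4-20s. Field interpretation is an assumption, not the SCS spec.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[A-Z])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_qualifier>[a-z])?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    d = m.groupdict()
    return {
        "vcpus": int(d["cpus"]),
        "cpu_class": d["cpu_class"],            # e.g. 'V', or 'L' as in SCS-1L-1
        "ram_gib": int(d["ram"]),
        "disk_gb": int(d["disk"]) if d["disk"] else None,  # None: no root disk part
        "disk_qualifier": d["disk_qualifier"],  # e.g. 's' as in SCS-2V-4-20s
    }

print(parse_scs_flavor("SCS-2V-4-20s"))
```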
2025-06-02 20:26:26.227144 | orchestrator | 2025-06-02 20:26:26.228152 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-02 20:26:26.230296 | orchestrator | 2025-06-02 20:26:26.233146 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 20:26:26.233361 | orchestrator | Monday 02 June 2025 20:26:26 +0000 (0:00:00.071) 0:00:00.071 *********** 2025-06-02 20:26:27.974376 | orchestrator | ok: [localhost] 2025-06-02 20:26:27.974986 | orchestrator | 2025-06-02 20:26:27.975739 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-02 20:26:27.976415 | orchestrator | Monday 02 June 2025 20:26:27 +0000 (0:00:01.752) 0:00:01.823 *********** 2025-06-02 20:26:35.434086 | orchestrator | ok: [localhost] 2025-06-02 20:26:35.434574 | orchestrator | 2025-06-02 20:26:35.437078 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-02 20:26:35.438999 | orchestrator | Monday 02 June 2025 20:26:35 +0000 (0:00:07.455) 0:00:09.279 *********** 2025-06-02 20:26:42.853870 | orchestrator | changed: [localhost] 2025-06-02 20:26:42.853996 | orchestrator | 2025-06-02 20:26:42.854445 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-02 20:26:42.855738 | orchestrator | Monday 02 June 2025 20:26:42 +0000 (0:00:07.420) 0:00:16.699 *********** 2025-06-02 20:26:48.440497 | orchestrator | ok: [localhost] 2025-06-02 20:26:48.440596 | orchestrator | 2025-06-02 20:26:48.440743 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-02 20:26:48.441593 | orchestrator | Monday 02 June 2025 20:26:48 +0000 (0:00:05.586) 0:00:22.286 *********** 2025-06-02 20:26:55.753338 | orchestrator | changed: [localhost] 2025-06-02 20:26:55.753854 | orchestrator | 2025-06-02 20:26:55.754475 | orchestrator | 
TASK [Create public network] *************************************************** 2025-06-02 20:26:55.755536 | orchestrator | Monday 02 June 2025 20:26:55 +0000 (0:00:07.315) 0:00:29.601 *********** 2025-06-02 20:27:00.871172 | orchestrator | changed: [localhost] 2025-06-02 20:27:00.872450 | orchestrator | 2025-06-02 20:27:00.872566 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-02 20:27:00.873947 | orchestrator | Monday 02 June 2025 20:27:00 +0000 (0:00:05.116) 0:00:34.718 *********** 2025-06-02 20:27:06.811531 | orchestrator | changed: [localhost] 2025-06-02 20:27:06.811694 | orchestrator | 2025-06-02 20:27:06.811715 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-02 20:27:06.812969 | orchestrator | Monday 02 June 2025 20:27:06 +0000 (0:00:05.940) 0:00:40.658 *********** 2025-06-02 20:27:10.993995 | orchestrator | changed: [localhost] 2025-06-02 20:27:10.995492 | orchestrator | 2025-06-02 20:27:10.995855 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-02 20:27:10.996855 | orchestrator | Monday 02 June 2025 20:27:10 +0000 (0:00:04.184) 0:00:44.842 *********** 2025-06-02 20:27:14.745387 | orchestrator | changed: [localhost] 2025-06-02 20:27:14.745942 | orchestrator | 2025-06-02 20:27:14.747030 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-02 20:27:14.747939 | orchestrator | Monday 02 June 2025 20:27:14 +0000 (0:00:03.750) 0:00:48.592 *********** 2025-06-02 20:27:18.263775 | orchestrator | ok: [localhost] 2025-06-02 20:27:18.263908 | orchestrator | 2025-06-02 20:27:18.264553 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:27:18.264859 | orchestrator | 2025-06-02 20:27:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 20:27:18.264888 | orchestrator | 2025-06-02 20:27:18 | INFO  | Please wait and do not abort execution. 2025-06-02 20:27:18.265025 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:27:18.265803 | orchestrator | 2025-06-02 20:27:18.267174 | orchestrator | 2025-06-02 20:27:18.267600 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:27:18.268312 | orchestrator | Monday 02 June 2025 20:27:18 +0000 (0:00:03.518) 0:00:52.111 *********** 2025-06-02 20:27:18.268682 | orchestrator | =============================================================================== 2025-06-02 20:27:18.269332 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.46s 2025-06-02 20:27:18.269962 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.42s 2025-06-02 20:27:18.270153 | orchestrator | Create volume type local ------------------------------------------------ 7.32s 2025-06-02 20:27:18.270824 | orchestrator | Set public network to default ------------------------------------------- 5.94s 2025-06-02 20:27:18.271081 | orchestrator | Get volume type local --------------------------------------------------- 5.59s 2025-06-02 20:27:18.271457 | orchestrator | Create public network --------------------------------------------------- 5.12s 2025-06-02 20:27:18.271803 | orchestrator | Create public subnet ---------------------------------------------------- 4.18s 2025-06-02 20:27:18.272116 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.75s 2025-06-02 20:27:18.272854 | orchestrator | Create manager role ----------------------------------------------------- 3.52s 2025-06-02 20:27:18.274729 | orchestrator | Gathering Facts --------------------------------------------------------- 1.75s 2025-06-02 20:27:20.471198 | orchestrator | 2025-06-02 20:27:20 
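The bootstrap-basic play above pairs each "Get ..." task (reported `ok`) with a "Create ..." task (reported `changed`) per resource: volume types, the public network, subnet, and so on. A minimal sketch of that idempotent check-then-create pattern against an in-memory store; the real play talks to the OpenStack API through Ansible modules instead:

```python
# Sketch of the Get/Create idempotency pattern from the bootstrap-basic play,
# using an in-memory store in place of the OpenStack API.
class Cloud:
    def __init__(self):
        self.volume_types: dict[str, dict] = {}

    def ensure_volume_type(self, name: str, **props) -> bool:
        """Create the volume type if absent; return True if anything changed."""
        if name in self.volume_types:   # "Get volume type ..." -> ok, nothing to do
            return False
        self.volume_types[name] = props  # "Create volume type ..." -> changed
        return True

cloud = Cloud()
print(cloud.ensure_volume_type("LUKS"))  # True  (first run: changed)
print(cloud.ensure_volume_type("LUKS"))  # False (rerun: ok, already present)
```

Rerunning the play then reports `ok` where the first run reported `changed`, which is why the recap above shows both counters.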
| INFO  | It takes a moment until task 15bc8765-6ec9-439e-b82d-8d26257fb067 (image-manager) has been started and output is visible here. 2025-06-02 20:27:23.949151 | orchestrator | 2025-06-02 20:27:23 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-02 20:27:24.159391 | orchestrator | 2025-06-02 20:27:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-02 20:27:24.160630 | orchestrator | 2025-06-02 20:27:24 | INFO  | Importing image Cirros 0.6.2 2025-06-02 20:27:24.161579 | orchestrator | 2025-06-02 20:27:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 20:27:26.087884 | orchestrator | 2025-06-02 20:27:26 | INFO  | Waiting for image to leave queued state... 2025-06-02 20:27:28.132392 | orchestrator | 2025-06-02 20:27:28 | INFO  | Waiting for import to complete... 2025-06-02 20:27:38.469543 | orchestrator | 2025-06-02 20:27:38 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-02 20:27:38.705454 | orchestrator | 2025-06-02 20:27:38 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-02 20:27:38.706292 | orchestrator | 2025-06-02 20:27:38 | INFO  | Setting internal_version = 0.6.2 2025-06-02 20:27:38.707686 | orchestrator | 2025-06-02 20:27:38 | INFO  | Setting image_original_user = cirros 2025-06-02 20:27:38.710259 | orchestrator | 2025-06-02 20:27:38 | INFO  | Adding tag os:cirros 2025-06-02 20:27:39.168641 | orchestrator | 2025-06-02 20:27:39 | INFO  | Setting property architecture: x86_64 2025-06-02 20:27:39.407394 | orchestrator | 2025-06-02 20:27:39 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 20:27:39.728251 | orchestrator | 2025-06-02 20:27:39 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 20:27:39.963212 | orchestrator | 2025-06-02 20:27:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 20:27:40.162629 | orchestrator | 
2025-06-02 20:27:40 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 20:27:40.393430 | orchestrator | 2025-06-02 20:27:40 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 20:27:40.591230 | orchestrator | 2025-06-02 20:27:40 | INFO  | Setting property os_distro: cirros 2025-06-02 20:27:40.812433 | orchestrator | 2025-06-02 20:27:40 | INFO  | Setting property replace_frequency: never 2025-06-02 20:27:41.041368 | orchestrator | 2025-06-02 20:27:41 | INFO  | Setting property uuid_validity: none 2025-06-02 20:27:41.274076 | orchestrator | 2025-06-02 20:27:41 | INFO  | Setting property provided_until: none 2025-06-02 20:27:41.481262 | orchestrator | 2025-06-02 20:27:41 | INFO  | Setting property image_description: Cirros 2025-06-02 20:27:41.731290 | orchestrator | 2025-06-02 20:27:41 | INFO  | Setting property image_name: Cirros 2025-06-02 20:27:41.957470 | orchestrator | 2025-06-02 20:27:41 | INFO  | Setting property internal_version: 0.6.2 2025-06-02 20:27:42.184340 | orchestrator | 2025-06-02 20:27:42 | INFO  | Setting property image_original_user: cirros 2025-06-02 20:27:42.450338 | orchestrator | 2025-06-02 20:27:42 | INFO  | Setting property os_version: 0.6.2 2025-06-02 20:27:42.649610 | orchestrator | 2025-06-02 20:27:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 20:27:42.859174 | orchestrator | 2025-06-02 20:27:42 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-02 20:27:43.114691 | orchestrator | 2025-06-02 20:27:43 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-02 20:27:43.114817 | orchestrator | 2025-06-02 20:27:43 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-02 20:27:43.116052 | orchestrator | 2025-06-02 20:27:43 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-02 20:27:43.305379 | orchestrator | 2025-06-02 20:27:43 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-02 20:27:43.499987 | 
orchestrator | 2025-06-02 20:27:43 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-02 20:27:43.500989 | orchestrator | 2025-06-02 20:27:43 | INFO  | Importing image Cirros 0.6.3 2025-06-02 20:27:43.501466 | orchestrator | 2025-06-02 20:27:43 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-02 20:27:43.936636 | orchestrator | 2025-06-02 20:27:43 | INFO  | Waiting for image to leave queued state... 2025-06-02 20:27:45.981739 | orchestrator | 2025-06-02 20:27:45 | INFO  | Waiting for import to complete... 2025-06-02 20:27:56.125287 | orchestrator | 2025-06-02 20:27:56 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-02 20:27:56.397136 | orchestrator | 2025-06-02 20:27:56 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-02 20:27:56.397826 | orchestrator | 2025-06-02 20:27:56 | INFO  | Setting internal_version = 0.6.3 2025-06-02 20:27:56.399023 | orchestrator | 2025-06-02 20:27:56 | INFO  | Setting image_original_user = cirros 2025-06-02 20:27:56.399763 | orchestrator | 2025-06-02 20:27:56 | INFO  | Adding tag os:cirros 2025-06-02 20:27:56.633049 | orchestrator | 2025-06-02 20:27:56 | INFO  | Setting property architecture: x86_64 2025-06-02 20:27:56.913125 | orchestrator | 2025-06-02 20:27:56 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 20:27:57.174080 | orchestrator | 2025-06-02 20:27:57 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 20:27:57.401030 | orchestrator | 2025-06-02 20:27:57 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 20:27:57.606363 | orchestrator | 2025-06-02 20:27:57 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 20:27:57.820343 | orchestrator | 2025-06-02 20:27:57 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 20:27:58.082751 | orchestrator | 2025-06-02 20:27:58 | INFO  | Setting property 
os_distro: cirros 2025-06-02 20:27:58.275334 | orchestrator | 2025-06-02 20:27:58 | INFO  | Setting property replace_frequency: never 2025-06-02 20:27:58.526639 | orchestrator | 2025-06-02 20:27:58 | INFO  | Setting property uuid_validity: none 2025-06-02 20:27:58.775654 | orchestrator | 2025-06-02 20:27:58 | INFO  | Setting property provided_until: none 2025-06-02 20:27:58.957078 | orchestrator | 2025-06-02 20:27:58 | INFO  | Setting property image_description: Cirros 2025-06-02 20:27:59.175825 | orchestrator | 2025-06-02 20:27:59 | INFO  | Setting property image_name: Cirros 2025-06-02 20:27:59.430689 | orchestrator | 2025-06-02 20:27:59 | INFO  | Setting property internal_version: 0.6.3 2025-06-02 20:27:59.644367 | orchestrator | 2025-06-02 20:27:59 | INFO  | Setting property image_original_user: cirros 2025-06-02 20:27:59.871973 | orchestrator | 2025-06-02 20:27:59 | INFO  | Setting property os_version: 0.6.3 2025-06-02 20:28:00.113503 | orchestrator | 2025-06-02 20:28:00 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-02 20:28:00.345844 | orchestrator | 2025-06-02 20:28:00 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-02 20:28:00.585496 | orchestrator | 2025-06-02 20:28:00 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-02 20:28:00.586894 | orchestrator | 2025-06-02 20:28:00 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-02 20:28:00.587990 | orchestrator | 2025-06-02 20:28:00 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-02 20:28:01.517630 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-02 20:28:03.370141 | orchestrator | 2025-06-02 20:28:03 | INFO  | date: 2025-06-02 2025-06-02 20:28:03.370261 | orchestrator | 2025-06-02 20:28:03 | INFO  | image: octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 20:28:03.370280 | orchestrator | 2025-06-02 20:28:03 | 
INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 20:28:03.370318 | orchestrator | 2025-06-02 20:28:03 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2.CHECKSUM 2025-06-02 20:28:03.396288 | orchestrator | 2025-06-02 20:28:03 | INFO  | checksum: 4244ae669e0302e4de8dd880cdee4c27c232e9d393dd18f3521b5d0e7c284b7c 2025-06-02 20:28:03.468007 | orchestrator | 2025-06-02 20:28:03 | INFO  | It takes a moment until task 00c1e9c2-c6f3-4192-8ec5-394d9fa3b124 (image-manager) has been started and output is visible here. 2025-06-02 20:28:03.697334 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-06-02 20:28:03.697521 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-02 20:28:05.334141 | orchestrator | 2025-06-02 20:28:05 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 20:28:05.345932 | orchestrator | 2025-06-02 20:28:05 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2: 200 2025-06-02 20:28:05.346881 | orchestrator | 2025-06-02 20:28:05 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-02 2025-06-02 20:28:05.347584 | orchestrator | 2025-06-02 20:28:05 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 20:28:06.733238 | orchestrator | 2025-06-02 20:28:06 | INFO  | Waiting for image to leave queued state... 2025-06-02 20:28:08.775971 | orchestrator | 2025-06-02 20:28:08 | INFO  | Waiting for import to complete... 2025-06-02 20:28:18.877938 | orchestrator | 2025-06-02 20:28:18 | INFO  | Waiting for import to complete... 2025-06-02 20:28:28.979010 | orchestrator | 2025-06-02 20:28:28 | INFO  | Waiting for import to complete... 2025-06-02 20:28:39.081284 | orchestrator | 2025-06-02 20:28:39 | INFO  | Waiting for import to complete... 2025-06-02 20:28:49.179080 | orchestrator | 2025-06-02 20:28:49 | INFO  | Waiting for import to complete... 
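The amphora bootstrap above fetches a `.CHECKSUM` file next to the qcow2 and logs the extracted sha256 before importing. A minimal sketch of that verification step, using placeholder filenames rather than the real Swift URLs from the log (this is illustrative, not the actual osism bootstrap script):

```shell
#!/bin/sh
# Sketch of the checksum verification performed before an image import.
# IMAGE and the CHECKSUM path are hypothetical stand-ins for the
# downloaded qcow2 and its sidecar .CHECKSUM file.
set -e

IMAGE=/tmp/example-image.qcow2
printf 'fake image data' > "$IMAGE"   # stand-in for the downloaded qcow2

# A .CHECKSUM sidecar holds "<sha256>  <filename>" lines, the exact
# format that `sha256sum -c` consumes.
sha256sum "$IMAGE" > /tmp/example.CHECKSUM

# On a digest mismatch, sha256sum -c exits non-zero and set -e aborts
# the bootstrap before any upload happens.
sha256sum -c /tmp/example.CHECKSUM
echo "checksum OK"
```

The same pattern works against a real sidecar: download both files, then run `sha256sum -c` in the download directory.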
2025-06-02 20:28:59.315786 | orchestrator | 2025-06-02 20:28:59 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-02' successfully completed, reloading images 2025-06-02 20:28:59.927130 | orchestrator | 2025-06-02 20:28:59 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 20:28:59.927220 | orchestrator | 2025-06-02 20:28:59 | INFO  | Setting internal_version = 2025-06-02 2025-06-02 20:28:59.927940 | orchestrator | 2025-06-02 20:28:59 | INFO  | Setting image_original_user = ubuntu 2025-06-02 20:28:59.929253 | orchestrator | 2025-06-02 20:28:59 | INFO  | Adding tag amphora 2025-06-02 20:29:00.204395 | orchestrator | 2025-06-02 20:29:00 | INFO  | Adding tag os:ubuntu 2025-06-02 20:29:00.429915 | orchestrator | 2025-06-02 20:29:00 | INFO  | Setting property architecture: x86_64 2025-06-02 20:29:00.608542 | orchestrator | 2025-06-02 20:29:00 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 20:29:00.819109 | orchestrator | 2025-06-02 20:29:00 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 20:29:01.036351 | orchestrator | 2025-06-02 20:29:01 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 20:29:01.203701 | orchestrator | 2025-06-02 20:29:01 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 20:29:01.432820 | orchestrator | 2025-06-02 20:29:01 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 20:29:01.641306 | orchestrator | 2025-06-02 20:29:01 | INFO  | Setting property os_distro: ubuntu 2025-06-02 20:29:01.901963 | orchestrator | 2025-06-02 20:29:01 | INFO  | Setting property replace_frequency: quarterly 2025-06-02 20:29:02.159043 | orchestrator | 2025-06-02 20:29:02 | INFO  | Setting property uuid_validity: last-1 2025-06-02 20:29:02.399985 | orchestrator | 2025-06-02 20:29:02 | INFO  | Setting property provided_until: none 2025-06-02 20:29:02.613996 | orchestrator | 2025-06-02 20:29:02 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-02 
20:29:02.847188 | orchestrator | 2025-06-02 20:29:02 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-02 20:29:03.071552 | orchestrator | 2025-06-02 20:29:03 | INFO  | Setting property internal_version: 2025-06-02 2025-06-02 20:29:03.269039 | orchestrator | 2025-06-02 20:29:03 | INFO  | Setting property image_original_user: ubuntu 2025-06-02 20:29:03.502255 | orchestrator | 2025-06-02 20:29:03 | INFO  | Setting property os_version: 2025-06-02 2025-06-02 20:29:03.726775 | orchestrator | 2025-06-02 20:29:03 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 20:29:03.943183 | orchestrator | 2025-06-02 20:29:03 | INFO  | Setting property image_build_date: 2025-06-02 2025-06-02 20:29:04.179243 | orchestrator | 2025-06-02 20:29:04 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 20:29:04.179738 | orchestrator | 2025-06-02 20:29:04 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 20:29:04.375466 | orchestrator | 2025-06-02 20:29:04 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-02 20:29:04.376686 | orchestrator | 2025-06-02 20:29:04 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-02 20:29:04.377679 | orchestrator | 2025-06-02 20:29:04 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-02 20:29:04.378694 | orchestrator | 2025-06-02 20:29:04 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-02 20:29:04.907832 | orchestrator | ok: Runtime: 0:02:56.194908 2025-06-02 20:29:04.922449 | 2025-06-02 20:29:04.922559 | TASK [Run checks] 2025-06-02 20:29:05.632345 | orchestrator | + set -e 2025-06-02 20:29:05.632576 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 20:29:05.632603 | 
orchestrator | ++ export INTERACTIVE=false 2025-06-02 20:29:05.632624 | orchestrator | ++ INTERACTIVE=false 2025-06-02 20:29:05.632638 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 20:29:05.632651 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 20:29:05.632666 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 20:29:05.633735 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 20:29:05.640194 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-02 20:29:05.640278 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-02 20:29:05.640923 | orchestrator | 2025-06-02 20:29:05.640943 | orchestrator | # CHECK 2025-06-02 20:29:05.640954 | orchestrator | 2025-06-02 20:29:05.640964 | orchestrator | + echo 2025-06-02 20:29:05.640985 | orchestrator | + echo '# CHECK' 2025-06-02 20:29:05.640995 | orchestrator | + echo 2025-06-02 20:29:05.641009 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 20:29:05.642076 | orchestrator | ++ semver latest 5.0.0 2025-06-02 20:29:05.701962 | orchestrator | 2025-06-02 20:29:05.702094 | orchestrator | ## Containers @ testbed-manager 2025-06-02 20:29:05.702108 | orchestrator | 2025-06-02 20:29:05.702118 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-02 20:29:05.702127 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 20:29:05.702134 | orchestrator | + echo 2025-06-02 20:29:05.702142 | orchestrator | + echo '## Containers @ testbed-manager' 2025-06-02 20:29:05.702151 | orchestrator | + echo 2025-06-02 20:29:05.702158 | orchestrator | + osism container testbed-manager ps 2025-06-02 20:29:07.713453 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 20:29:07.713672 | orchestrator | adf402b8be14 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes 
prometheus_blackbox_exporter 2025-06-02 20:29:07.713730 | orchestrator | ae0e336d2f73 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager 2025-06-02 20:29:07.713751 | orchestrator | 367a4b172d59 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-02 20:29:07.713763 | orchestrator | 5d3ec96965cf registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-06-02 20:29:07.713775 | orchestrator | a23f69d70db8 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2025-06-02 20:29:07.713792 | orchestrator | 11be41180bb1 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient 2025-06-02 20:29:07.713804 | orchestrator | c9eb01f3ec50 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-06-02 20:29:07.713816 | orchestrator | c346c2fa55fd registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-06-02 20:29:07.713827 | orchestrator | f1fa3a56552d registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-06-02 20:29:07.713864 | orchestrator | 4b89bcf6b1ea phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin 2025-06-02 20:29:07.713876 | orchestrator | 5ec283ff798e registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 30 minutes openstackclient 2025-06-02 20:29:07.713888 | orchestrator | 478c33c9b90a registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 30 minutes ago Up 30 minutes (healthy) 8080/tcp homer 2025-06-02 20:29:07.713899 | orchestrator | 601746ae430b 
registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 37 minutes ago Up 37 minutes (healthy) osism-ansible 2025-06-02 20:29:07.713910 | orchestrator | b7622894109e registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 51 minutes ago Up 50 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-02 20:29:07.713927 | orchestrator | 4907660aa463 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 54 minutes ago Up 36 minutes (healthy) manager-inventory_reconciler-1 2025-06-02 20:29:07.713960 | orchestrator | 6af70154d489 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 54 minutes ago Up 37 minutes (healthy) kolla-ansible 2025-06-02 20:29:07.713973 | orchestrator | 13d12681867d registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 54 minutes ago Up 37 minutes (healthy) osism-kubernetes 2025-06-02 20:29:07.713984 | orchestrator | 1cee33199e77 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 54 minutes ago Up 37 minutes (healthy) ceph-ansible 2025-06-02 20:29:07.713995 | orchestrator | 553a11964660 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 54 minutes ago Up 37 minutes (healthy) 8000/tcp manager-ara-server-1 2025-06-02 20:29:07.714006 | orchestrator | 2358f53d3224 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 37 minutes (healthy) manager-flower-1 2025-06-02 20:29:07.714103 | orchestrator | f4a15f603f03 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 54 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1 2025-06-02 20:29:07.714120 | orchestrator | 6e0279850857 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 37 minutes (healthy) manager-openstack-1 2025-06-02 20:29:07.714131 | orchestrator | 1f823ca20a17 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 37 minutes (healthy) 
manager-listener-1 2025-06-02 20:29:07.714142 | orchestrator | 23427d77f92e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-06-02 20:29:07.714163 | orchestrator | c77373051614 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 54 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1 2025-06-02 20:29:07.714175 | orchestrator | 36c16e99494c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 37 minutes (healthy) manager-beat-1 2025-06-02 20:29:07.714186 | orchestrator | dfafcd9dd066 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 54 minutes ago Up 37 minutes (healthy) osismclient 2025-06-02 20:29:07.714196 | orchestrator | f8fc136ce909 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-06-02 20:29:07.941365 | orchestrator | 2025-06-02 20:29:07.941528 | orchestrator | ## Images @ testbed-manager 2025-06-02 20:29:07.941558 | orchestrator | 2025-06-02 20:29:07.941578 | orchestrator | + echo 2025-06-02 20:29:07.941597 | orchestrator | + echo '## Images @ testbed-manager' 2025-06-02 20:29:07.941618 | orchestrator | + echo 2025-06-02 20:29:07.941636 | orchestrator | + osism container testbed-manager images 2025-06-02 20:29:09.917033 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 20:29:09.917151 | orchestrator | registry.osism.tech/osism/osism-ansible latest f953a9ab3915 48 minutes ago 577MB 2025-06-02 20:29:09.917168 | orchestrator | registry.osism.tech/osism/osism-ansible 01d87dbdb7ff 2 hours ago 577MB 2025-06-02 20:29:09.917180 | orchestrator | registry.osism.tech/osism/osism latest ac1f7959a33a 4 hours ago 297MB 2025-06-02 20:29:09.917191 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 8f1cf06d366b 7 hours 
ago 574MB 2025-06-02 20:29:09.917202 | orchestrator | registry.osism.tech/osism/homer v25.05.2 e73e0506845d 17 hours ago 11.5MB 2025-06-02 20:29:09.917256 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 86ee4afc8387 17 hours ago 225MB 2025-06-02 20:29:09.917268 | orchestrator | registry.osism.tech/osism/cephclient reef 3d7d8b8bbba7 17 hours ago 454MB 2025-06-02 20:29:09.917279 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 d83e4c60464a 19 hours ago 629MB 2025-06-02 20:29:09.917290 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5b108bf8b06 19 hours ago 319MB 2025-06-02 20:29:09.917322 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d96ad4a06177 19 hours ago 747MB 2025-06-02 20:29:09.917333 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 98f0ac7b228f 19 hours ago 457MB 2025-06-02 20:29:09.917344 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b4411222e57e 19 hours ago 411MB 2025-06-02 20:29:09.917355 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5134a96e4dfe 19 hours ago 359MB 2025-06-02 20:29:09.917366 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 058fdfb821be 19 hours ago 361MB 2025-06-02 20:29:09.917376 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 fef9d4ae652b 19 hours ago 892MB 2025-06-02 20:29:09.917515 | orchestrator | registry.osism.tech/osism/ceph-ansible reef b20110f9400d 20 hours ago 538MB 2025-06-02 20:29:09.917534 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 95f78bc350f5 20 hours ago 1.21GB 2025-06-02 20:29:09.917568 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 77eaadf2782f 20 hours ago 310MB 2025-06-02 20:29:09.917579 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 4 days ago 41.4MB 2025-06-02 20:29:09.917590 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 
ff0a241c8a0a 6 days ago 224MB 2025-06-02 20:29:09.917601 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB 2025-06-02 20:29:09.917612 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-06-02 20:29:09.917623 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB 2025-06-02 20:29:09.917633 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-06-02 20:29:10.158359 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 20:29:10.159457 | orchestrator | ++ semver latest 5.0.0 2025-06-02 20:29:10.213243 | orchestrator | 2025-06-02 20:29:10.213332 | orchestrator | ## Containers @ testbed-node-0 2025-06-02 20:29:10.213343 | orchestrator | 2025-06-02 20:29:10.213351 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-02 20:29:10.213357 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 20:29:10.213364 | orchestrator | + echo 2025-06-02 20:29:10.213371 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-02 20:29:10.213380 | orchestrator | + echo 2025-06-02 20:29:10.213387 | orchestrator | + osism container testbed-node-0 ps 2025-06-02 20:29:12.346295 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 20:29:12.346408 | orchestrator | 6b33d56f16e4 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-02 20:29:12.346424 | orchestrator | 3e96b71c752b registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-02 20:29:12.346437 | orchestrator | cbd4b1aa8c04 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-02 20:29:12.346447 | orchestrator | d22690f6add6 
registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-02 20:29:12.346457 | orchestrator | 40d7b8bce5ca registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-02 20:29:12.346503 | orchestrator | 6c632b175c6d registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-02 20:29:12.346524 | orchestrator | 8f37d99029ac registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-06-02 20:29:12.346542 | orchestrator | 22978df70be7 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-02 20:29:12.346559 | orchestrator | deff2a335eb3 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-02 20:29:12.346575 | orchestrator | e91666b90938 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-02 20:29:12.346587 | orchestrator | 232a4e64a401 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-02 20:29:12.346634 | orchestrator | c7222705f43a registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-02 20:29:12.346645 | orchestrator | 180196c78976 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-06-02 20:29:12.346655 | orchestrator | 92b98083f442 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-06-02 20:29:12.346665 | orchestrator | ca068d82882d 
registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-02 20:29:12.346674 | orchestrator | 3464141566ca registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-02 20:29:12.346684 | orchestrator | c7c3f43212e4 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-02 20:29:12.346693 | orchestrator | d51fe40e28eb registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-06-02 20:29:12.346703 | orchestrator | 8f5dfdac9fca registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-02 20:29:12.346713 | orchestrator | 4dd8b05bbd55 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-06-02 20:29:12.346722 | orchestrator | d94032760a3d registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-06-02 20:29:12.346749 | orchestrator | 770c163f1445 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) barbican_api 2025-06-02 20:29:12.346760 | orchestrator | fb4d8461e914 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-02 20:29:12.346771 | orchestrator | 7d36163f18a8 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-06-02 20:29:12.346786 | orchestrator | 38469fbcf4fa registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-06-02 20:29:12.346808 | 
orchestrator | 090f2528513c registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-02 20:29:12.346833 | orchestrator | 164ee51a0952 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-02 20:29:12.346851 | orchestrator | 3ada9e24becf registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-02 20:29:12.346867 | orchestrator | adaa745e8d95 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-06-02 20:29:12.346882 | orchestrator | 2d70bf1dac53 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-06-02 20:29:12.346897 | orchestrator | dde5f3d5b7c5 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-02 20:29:12.346925 | orchestrator | fac05b203dac registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2025-06-02 20:29:12.346950 | orchestrator | 551955c5531b registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-06-02 20:29:12.346967 | orchestrator | e20283989067 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-06-02 20:29:12.346983 | orchestrator | 39d4378729ae registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-06-02 20:29:12.346998 | orchestrator | 4023d101956d registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) 
horizon 2025-06-02 20:29:12.347008 | orchestrator | 80e99b53054a registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-06-02 20:29:12.347018 | orchestrator | 0d64a46f8fdc registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-06-02 20:29:12.347028 | orchestrator | f1cb6f551178 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-06-02 20:29:12.347037 | orchestrator | c77100894632 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-06-02 20:29:12.347047 | orchestrator | 1da8f6902d40 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2025-06-02 20:29:12.347057 | orchestrator | 280a4b440bad registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-06-02 20:29:12.347066 | orchestrator | fd4ea76a1f47 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-06-02 20:29:12.347076 | orchestrator | 5882f9710fab registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-06-02 20:29:12.347102 | orchestrator | 8b4160bf9a9f registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-06-02 20:29:12.347117 | orchestrator | 5321622c942d registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-06-02 20:29:12.347126 | orchestrator | ef1767341e74 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-06-02 20:29:12.347136 | orchestrator | f56b4da6ac4a 
registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2025-06-02 20:29:12.347146 | orchestrator | 5969544bf273 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-06-02 20:29:12.347155 | orchestrator | 42be2a5f077c registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-06-02 20:29:12.347165 | orchestrator | 3636a032e04e registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-06-02 20:29:12.347180 | orchestrator | 01552f790a67 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-06-02 20:29:12.347190 | orchestrator | ba5b4da8a21f registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-06-02 20:29:12.347200 | orchestrator | dd8de303b191 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-06-02 20:29:12.347209 | orchestrator | 6db44ee6e9e7 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-06-02 20:29:12.347219 | orchestrator | 85c97a5fba77 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-06-02 20:29:12.347228 | orchestrator | fd4d744bdd5a registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-06-02 20:29:12.599211 | orchestrator | 2025-06-02 20:29:12.599320 | orchestrator | ## Images @ testbed-node-0 2025-06-02 20:29:12.599337 | orchestrator | 2025-06-02 20:29:12.599349 | orchestrator | + echo 2025-06-02 20:29:12.599362 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-02 20:29:12.599374 | 
orchestrator | + echo 2025-06-02 20:29:12.599385 | orchestrator | + osism container testbed-node-0 images 2025-06-02 20:29:14.672176 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 20:29:14.672289 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 271b9d293e19 17 hours ago 1.27GB 2025-06-02 20:29:14.672305 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 392808c41677 19 hours ago 319MB 2025-06-02 20:29:14.672319 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 d83e4c60464a 19 hours ago 629MB 2025-06-02 20:29:14.672331 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 83dfa36b0b09 19 hours ago 376MB 2025-06-02 20:29:14.672342 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5b108bf8b06 19 hours ago 319MB 2025-06-02 20:29:14.672353 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 9534d2527bd9 19 hours ago 327MB 2025-06-02 20:29:14.672364 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 88f1dfbac042 19 hours ago 1.59GB 2025-06-02 20:29:14.672375 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 0f911db240a6 19 hours ago 1.01GB 2025-06-02 20:29:14.672386 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 307c7b2e9629 19 hours ago 1.55GB 2025-06-02 20:29:14.672396 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5b770fdbd519 19 hours ago 330MB 2025-06-02 20:29:14.672408 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d0f7c25d3497 19 hours ago 419MB 2025-06-02 20:29:14.672419 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d96ad4a06177 19 hours ago 747MB 2025-06-02 20:29:14.672430 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4b29449821be 19 hours ago 326MB 2025-06-02 20:29:14.672441 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a4f9468e38ea 19 hours ago 325MB 2025-06-02 20:29:14.672451 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 75af3084c3d1 19 hours ago 
352MB 2025-06-02 20:29:14.672462 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b4411222e57e 19 hours ago 411MB 2025-06-02 20:29:14.672530 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 db5ce49c89cc 19 hours ago 345MB 2025-06-02 20:29:14.672568 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5134a96e4dfe 19 hours ago 359MB 2025-06-02 20:29:14.672579 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 03e0f3198b34 19 hours ago 354MB 2025-06-02 20:29:14.672590 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 8dfe63d220a5 19 hours ago 362MB 2025-06-02 20:29:14.672601 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 34548ea593f0 19 hours ago 362MB 2025-06-02 20:29:14.672611 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29ac703ff67c 19 hours ago 591MB 2025-06-02 20:29:14.672622 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 fe51ac78c8f1 19 hours ago 1.21GB 2025-06-02 20:29:14.672632 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 c4655637af6e 19 hours ago 947MB 2025-06-02 20:29:14.672643 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 501bf0c10100 19 hours ago 948MB 2025-06-02 20:29:14.672654 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 bff812ef8262 19 hours ago 948MB 2025-06-02 20:29:14.672664 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 e6e013a1a722 19 hours ago 947MB 2025-06-02 20:29:14.672675 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 23e5ad899301 19 hours ago 1.41GB 2025-06-02 20:29:14.672701 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 45b363b7482a 19 hours ago 1.41GB 2025-06-02 20:29:14.672714 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 760164fe4759 19 hours ago 1.29GB 2025-06-02 20:29:14.672727 | orchestrator | registry.osism.tech/kolla/nova-conductor 
2024.2 f5741b323fe9 19 hours ago 1.29GB 2025-06-02 20:29:14.672740 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 ef9c983c3ed3 19 hours ago 1.3GB 2025-06-02 20:29:14.672752 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 35396146c866 19 hours ago 1.42GB 2025-06-02 20:29:14.672765 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 70795d3e49ef 19 hours ago 1.15GB 2025-06-02 20:29:14.672777 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 de33a20e612e 19 hours ago 1.31GB 2025-06-02 20:29:14.672789 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 462af32e366a 19 hours ago 1.2GB 2025-06-02 20:29:14.672819 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 21905100e3ed 19 hours ago 1.06GB 2025-06-02 20:29:14.673013 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9c686edf4034 19 hours ago 1.06GB 2025-06-02 20:29:14.673115 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 e5000fc07327 19 hours ago 1.06GB 2025-06-02 20:29:14.673147 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 8a6a3d63670d 19 hours ago 1.04GB 2025-06-02 20:29:14.673160 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 d8bc8850fca0 19 hours ago 1.04GB 2025-06-02 20:29:14.673171 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 4e7db9d8828a 19 hours ago 1.04GB 2025-06-02 20:29:14.673182 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 6382990ff4a0 19 hours ago 1.04GB 2025-06-02 20:29:14.673192 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 da1a6531a58f 19 hours ago 1.11GB 2025-06-02 20:29:14.673202 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 89a10b4f8d41 19 hours ago 1.12GB 2025-06-02 20:29:14.673213 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 db5d29469dee 19 hours ago 1.1GB 2025-06-02 20:29:14.673250 | orchestrator | 
registry.osism.tech/kolla/octavia-worker 2024.2 47facbd328df 19 hours ago 1.1GB 2025-06-02 20:29:14.673261 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a89f287066ef 19 hours ago 1.12GB 2025-06-02 20:29:14.673271 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 1f4bea213a07 19 hours ago 1.1GB 2025-06-02 20:29:14.673282 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 97ff50a4c378 19 hours ago 1.12GB 2025-06-02 20:29:14.673292 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 aed6aac6097b 19 hours ago 1.04GB 2025-06-02 20:29:14.673303 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 def5173eaa7a 19 hours ago 1.04GB 2025-06-02 20:29:14.673313 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 c4ed2f5a2192 19 hours ago 1.11GB 2025-06-02 20:29:14.673324 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 ea224ddfbd63 19 hours ago 1.11GB 2025-06-02 20:29:14.673334 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 68b4a4b40b7c 19 hours ago 1.13GB 2025-06-02 20:29:14.673345 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8f7230e2e54a 19 hours ago 1.04GB 2025-06-02 20:29:14.673355 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3a64d65ac616 19 hours ago 1.05GB 2025-06-02 20:29:14.673385 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c3e9f7a9a34d 19 hours ago 1.05GB 2025-06-02 20:29:14.673396 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 99480384bf9d 19 hours ago 1.06GB 2025-06-02 20:29:14.673407 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 16d05b3fd708 19 hours ago 1.05GB 2025-06-02 20:29:14.673418 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5935e336ac71 19 hours ago 1.06GB 2025-06-02 20:29:14.673428 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 ad58c3a011c5 19 hours ago 1.05GB 2025-06-02 
20:29:14.673440 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 321a68afc007 19 hours ago 1.25GB
2025-06-02 20:29:14.894077 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 20:29:14.894921 | orchestrator | ++ semver latest 5.0.0
2025-06-02 20:29:14.954587 | orchestrator |
2025-06-02 20:29:14.954687 | orchestrator | ## Containers @ testbed-node-1
2025-06-02 20:29:14.954704 | orchestrator |
2025-06-02 20:29:14.954716 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-02 20:29:14.954728 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-02 20:29:14.954739 | orchestrator | + echo
2025-06-02 20:29:14.954750 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-06-02 20:29:14.954762 | orchestrator | + echo
2025-06-02 20:29:14.954774 | orchestrator | + osism container testbed-node-1 ps
2025-06-02 20:29:17.078614 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 20:29:17.078695 | orchestrator | d35dad8f60d0 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-02 20:29:17.078703 | orchestrator | 72b41c428e20 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-02 20:29:17.078708 | orchestrator | 832aeee297c5 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-02 20:29:17.078712 | orchestrator | 90271324f128 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-02 20:29:17.078717 | orchestrator | 714e44176ed7 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-06-02 20:29:17.078736 | orchestrator | 2b7539406a75 registry.osism.tech/kolla/grafana:2024.2 "dumb-init
--single-…" 7 minutes ago Up 7 minutes grafana 2025-06-02 20:29:17.078741 | orchestrator | 39f0f6d7b2e9 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-02 20:29:17.078744 | orchestrator | 14b3ebc0a92d registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-06-02 20:29:17.078748 | orchestrator | c7e0d03c3537 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-02 20:29:17.078752 | orchestrator | f614ef1b76c5 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-02 20:29:17.078756 | orchestrator | a255250cdb61 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-02 20:29:17.078759 | orchestrator | 2ee16d8d229e registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-02 20:29:17.078763 | orchestrator | e23e53ca5602 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-06-02 20:29:17.078778 | orchestrator | 89bbf9d3f59b registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-06-02 20:29:17.078786 | orchestrator | 377b33990d66 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-02 20:29:17.078790 | orchestrator | aeff789486e4 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-02 20:29:17.078794 | orchestrator | 41d67956ae0b registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes 
ago Up 10 minutes (healthy) neutron_server 2025-06-02 20:29:17.078798 | orchestrator | 399c1c27c92f registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-02 20:29:17.078801 | orchestrator | f34410cabcae registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-02 20:29:17.078805 | orchestrator | 1859823ead47 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-06-02 20:29:17.078809 | orchestrator | 62e4ddd4fed2 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-06-02 20:29:17.078823 | orchestrator | 9eb4698fd6cf registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-02 20:29:17.078827 | orchestrator | 6278ae6c01c9 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-06-02 20:29:17.078831 | orchestrator | 17e91a51655c registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-06-02 20:29:17.078835 | orchestrator | 4d2e30654aa7 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-06-02 20:29:17.078842 | orchestrator | ac390c484c32 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes (healthy) cinder_api 2025-06-02 20:29:17.078845 | orchestrator | d7151c231485 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-02 20:29:17.078850 | orchestrator | 51c7c875cf3f 
registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-02 20:29:17.078854 | orchestrator | 6a6ec7f045b8 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-06-02 20:29:17.078857 | orchestrator | f4c87295f0c9 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-06-02 20:29:17.078861 | orchestrator | d109405102c9 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-02 20:29:17.078865 | orchestrator | 26b291298684 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2025-06-02 20:29:17.078869 | orchestrator | 1d6cdef7922a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-06-02 20:29:17.078872 | orchestrator | 0cb61635f50b registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-06-02 20:29:17.078876 | orchestrator | 426678878b08 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-06-02 20:29:17.078883 | orchestrator | 1c2f57067880 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-06-02 20:29:17.078887 | orchestrator | d8b94bf1f08a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-06-02 20:29:17.078890 | orchestrator | 753d6087c8ce registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-06-02 20:29:17.078894 | 
orchestrator | da49b9d65382 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-06-02 20:29:17.078898 | orchestrator | 25eb26a0e4d3 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-06-02 20:29:17.078901 | orchestrator | d02fab01d33f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1 2025-06-02 20:29:17.078905 | orchestrator | c3b045d1bfa6 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-06-02 20:29:17.078909 | orchestrator | 28a73ccfaa41 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-06-02 20:29:17.078913 | orchestrator | 5de66738415e registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-06-02 20:29:17.078922 | orchestrator | 02967f60d17d registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-06-02 20:29:17.078926 | orchestrator | 5fafc6e6f01c registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-06-02 20:29:17.078930 | orchestrator | 287c45ed4cab registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-06-02 20:29:17.078933 | orchestrator | c89b0e90a72e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-06-02 20:29:17.078937 | orchestrator | 801821aa3d50 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1 2025-06-02 20:29:17.078941 | orchestrator | 32611eb13908 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 
minutes (healthy) openvswitch_vswitchd
2025-06-02 20:29:17.078945 | orchestrator | 435915b1fcd8 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-06-02 20:29:17.078948 | orchestrator | b1f8b235d81c registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-06-02 20:29:17.078952 | orchestrator | 05b1485f62e1 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-06-02 20:29:17.078956 | orchestrator | f067310f90b1 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-06-02 20:29:17.078959 | orchestrator | 0e7c5916a82e registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-06-02 20:29:17.078963 | orchestrator | c5204004088e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-06-02 20:29:17.078967 | orchestrator | 4d0ecc1f0d8e registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-06-02 20:29:17.318962 | orchestrator |
2025-06-02 20:29:17.319058 | orchestrator | ## Images @ testbed-node-1
2025-06-02 20:29:17.319073 | orchestrator |
2025-06-02 20:29:17.319086 | orchestrator | + echo
2025-06-02 20:29:17.319098 | orchestrator | + echo '## Images @ testbed-node-1'
2025-06-02 20:29:17.319111 | orchestrator | + echo
2025-06-02 20:29:17.319122 | orchestrator | + osism container testbed-node-1 images
2025-06-02 20:29:19.386371 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 20:29:19.386563 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 271b9d293e19 17 hours ago 1.27GB
2025-06-02 20:29:19.386593 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 392808c41677 19 hours ago 319MB
2025-06-02 20:29:19.386613 |
orchestrator | registry.osism.tech/kolla/fluentd 2024.2 d83e4c60464a 19 hours ago 629MB 2025-06-02 20:29:19.386631 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 83dfa36b0b09 19 hours ago 376MB 2025-06-02 20:29:19.386671 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5b108bf8b06 19 hours ago 319MB 2025-06-02 20:29:19.386690 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 9534d2527bd9 19 hours ago 327MB 2025-06-02 20:29:19.386708 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 0f911db240a6 19 hours ago 1.01GB 2025-06-02 20:29:19.386758 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 88f1dfbac042 19 hours ago 1.59GB 2025-06-02 20:29:19.386778 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 307c7b2e9629 19 hours ago 1.55GB 2025-06-02 20:29:19.386796 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5b770fdbd519 19 hours ago 330MB 2025-06-02 20:29:19.386814 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d0f7c25d3497 19 hours ago 419MB 2025-06-02 20:29:19.386833 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d96ad4a06177 19 hours ago 747MB 2025-06-02 20:29:19.386850 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4b29449821be 19 hours ago 326MB 2025-06-02 20:29:19.386868 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a4f9468e38ea 19 hours ago 325MB 2025-06-02 20:29:19.386888 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 75af3084c3d1 19 hours ago 352MB 2025-06-02 20:29:19.386906 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b4411222e57e 19 hours ago 411MB 2025-06-02 20:29:19.386926 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 db5ce49c89cc 19 hours ago 345MB 2025-06-02 20:29:19.386945 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 03e0f3198b34 19 hours ago 354MB 2025-06-02 20:29:19.386963 | orchestrator | 
registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5134a96e4dfe 19 hours ago 359MB 2025-06-02 20:29:19.386981 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 8dfe63d220a5 19 hours ago 362MB 2025-06-02 20:29:19.387000 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 34548ea593f0 19 hours ago 362MB 2025-06-02 20:29:19.387019 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29ac703ff67c 19 hours ago 591MB 2025-06-02 20:29:19.387038 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 fe51ac78c8f1 19 hours ago 1.21GB 2025-06-02 20:29:19.387056 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 c4655637af6e 19 hours ago 947MB 2025-06-02 20:29:19.387075 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 501bf0c10100 19 hours ago 948MB 2025-06-02 20:29:19.387093 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 bff812ef8262 19 hours ago 948MB 2025-06-02 20:29:19.387111 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 e6e013a1a722 19 hours ago 947MB 2025-06-02 20:29:19.387129 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 23e5ad899301 19 hours ago 1.41GB 2025-06-02 20:29:19.387147 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 45b363b7482a 19 hours ago 1.41GB 2025-06-02 20:29:19.387166 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 760164fe4759 19 hours ago 1.29GB 2025-06-02 20:29:19.387184 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 f5741b323fe9 19 hours ago 1.29GB 2025-06-02 20:29:19.387203 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 ef9c983c3ed3 19 hours ago 1.3GB 2025-06-02 20:29:19.387221 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 35396146c866 19 hours ago 1.42GB 2025-06-02 20:29:19.387240 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 70795d3e49ef 19 hours ago 1.15GB 2025-06-02 20:29:19.387259 | orchestrator | 
registry.osism.tech/kolla/magnum-conductor 2024.2 de33a20e612e 19 hours ago 1.31GB 2025-06-02 20:29:19.387276 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 462af32e366a 19 hours ago 1.2GB 2025-06-02 20:29:19.387318 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 21905100e3ed 19 hours ago 1.06GB 2025-06-02 20:29:19.387354 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9c686edf4034 19 hours ago 1.06GB 2025-06-02 20:29:19.387374 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 e5000fc07327 19 hours ago 1.06GB 2025-06-02 20:29:19.387391 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 db5d29469dee 19 hours ago 1.1GB 2025-06-02 20:29:19.387407 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 47facbd328df 19 hours ago 1.1GB 2025-06-02 20:29:19.387423 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a89f287066ef 19 hours ago 1.12GB 2025-06-02 20:29:19.387439 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 1f4bea213a07 19 hours ago 1.1GB 2025-06-02 20:29:19.387455 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 97ff50a4c378 19 hours ago 1.12GB 2025-06-02 20:29:19.387520 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 c4ed2f5a2192 19 hours ago 1.11GB 2025-06-02 20:29:19.387538 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 ea224ddfbd63 19 hours ago 1.11GB 2025-06-02 20:29:19.387554 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 68b4a4b40b7c 19 hours ago 1.13GB 2025-06-02 20:29:19.387569 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8f7230e2e54a 19 hours ago 1.04GB 2025-06-02 20:29:19.387586 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3a64d65ac616 19 hours ago 1.05GB 2025-06-02 20:29:19.387602 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c3e9f7a9a34d 19 hours ago 1.05GB 2025-06-02 
20:29:19.387619 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 99480384bf9d 19 hours ago 1.06GB
2025-06-02 20:29:19.387641 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 16d05b3fd708 19 hours ago 1.05GB
2025-06-02 20:29:19.387651 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5935e336ac71 19 hours ago 1.06GB
2025-06-02 20:29:19.387661 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 ad58c3a011c5 19 hours ago 1.05GB
2025-06-02 20:29:19.387671 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 321a68afc007 19 hours ago 1.25GB
2025-06-02 20:29:19.612618 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 20:29:19.612784 | orchestrator | ++ semver latest 5.0.0
2025-06-02 20:29:19.668757 | orchestrator |
2025-06-02 20:29:19.668880 | orchestrator | ## Containers @ testbed-node-2
2025-06-02 20:29:19.668898 | orchestrator |
2025-06-02 20:29:19.668910 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-02 20:29:19.668921 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-02 20:29:19.668932 | orchestrator | + echo
2025-06-02 20:29:19.668943 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-06-02 20:29:19.668980 | orchestrator | + echo
2025-06-02 20:29:19.669001 | orchestrator | + osism container testbed-node-2 ps
2025-06-02 20:29:21.779506 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 20:29:21.779579 | orchestrator | 17aa2b65f43c registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-02 20:29:21.779600 | orchestrator | 4722667014b9 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-02 20:29:21.779604 | orchestrator | b99d6712eb37 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up
4 minutes (healthy) octavia_health_manager 2025-06-02 20:29:21.779608 | orchestrator | d35ce3526efd registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-02 20:29:21.779628 | orchestrator | 85f106cf79de registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-02 20:29:21.779632 | orchestrator | 23be19a9dddc registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-02 20:29:21.779636 | orchestrator | f63fb3be0479 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-02 20:29:21.779639 | orchestrator | 4ec36f565c2c registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-06-02 20:29:21.779643 | orchestrator | c15c277d9d41 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-02 20:29:21.779647 | orchestrator | 20910479b30b registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-02 20:29:21.779651 | orchestrator | 413a2dac970c registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-02 20:29:21.779654 | orchestrator | 7785e23ac138 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-02 20:29:21.780821 | orchestrator | b1b827f84921 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-06-02 20:29:21.780901 | orchestrator | 2c52fc55f477 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) 
designate_central 2025-06-02 20:29:21.780917 | orchestrator | e3b5e4e59da3 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-02 20:29:21.780929 | orchestrator | eda0e3de5a54 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-02 20:29:21.780941 | orchestrator | 9f5758e428b9 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-06-02 20:29:21.780952 | orchestrator | 6ac954e4c58c registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-02 20:29:21.780962 | orchestrator | f57cb0082ff0 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-02 20:29:21.780973 | orchestrator | 15308844e862 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-06-02 20:29:21.780984 | orchestrator | a36be8ef0045 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-06-02 20:29:21.780995 | orchestrator | c4532c003910 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-06-02 20:29:21.781006 | orchestrator | 3319b84ff171 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-02 20:29:21.781017 | orchestrator | d45e1bce7804 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-06-02 20:29:21.781047 | orchestrator | 6102d71dc712 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago 
Up 13 minutes (healthy) cinder_scheduler 2025-06-02 20:29:21.781058 | orchestrator | 01a8872f9232 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-02 20:29:21.781083 | orchestrator | 25911ea88396 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-02 20:29:21.781095 | orchestrator | 583a943ace65 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-02 20:29:21.781106 | orchestrator | 3c705ac9ea1a registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-06-02 20:29:21.781117 | orchestrator | 113bc7f7e542 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-06-02 20:29:21.781128 | orchestrator | b917aa13d31c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-02 20:29:21.781139 | orchestrator | 0bcc70c76404 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-06-02 20:29:21.781149 | orchestrator | 87806259aa80 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-06-02 20:29:21.781160 | orchestrator | 16b3c314f366 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-06-02 20:29:21.781187 | orchestrator | 2d20ffa58f37 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-06-02 20:29:21.781199 | orchestrator | 380a5719ad26 registry.osism.tech/kolla/keystone-ssh:2024.2 
"dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-06-02 20:29:21.781210 | orchestrator | 114c8584db62 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-06-02 20:29:21.781221 | orchestrator | 0e553e9332bf registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 19 minutes (healthy) mariadb 2025-06-02 20:29:21.781231 | orchestrator | 0ad5156ae894 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-06-02 20:29:21.781242 | orchestrator | 55a1de6b47f8 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-06-02 20:29:21.781253 | orchestrator | 28663beaeca7 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2025-06-02 20:29:21.781264 | orchestrator | 9f5081cb84ac registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-06-02 20:29:21.781274 | orchestrator | 14f5f36af613 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-06-02 20:29:21.781301 | orchestrator | eb1cf327f2d2 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-06-02 20:29:21.781312 | orchestrator | 1594b54bbc10 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-06-02 20:29:21.781323 | orchestrator | 9d69669b0f20 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-06-02 20:29:21.781334 | orchestrator | 607f57ba3e00 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-06-02 20:29:21.781344 | 
orchestrator | 3ce6b0dddcb1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-06-02 20:29:21.781355 | orchestrator | 0a38bfe51fe0 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2 2025-06-02 20:29:21.781366 | orchestrator | 551fc475042a registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-06-02 20:29:21.781379 | orchestrator | 5fdf24afafc5 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-06-02 20:29:21.781391 | orchestrator | cf3696799d02 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-06-02 20:29:21.781404 | orchestrator | 420fd3c301f7 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-06-02 20:29:21.781416 | orchestrator | 8d7e76ed685c registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-06-02 20:29:21.781429 | orchestrator | ce39d2c66526 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-06-02 20:29:21.781441 | orchestrator | a896a17d9e04 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-06-02 20:29:21.781453 | orchestrator | 0eb507fc4851 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-06-02 20:29:22.036222 | orchestrator | 2025-06-02 20:29:22.036304 | orchestrator | ## Images @ testbed-node-2 2025-06-02 20:29:22.036312 | orchestrator | 2025-06-02 20:29:22.036319 | orchestrator | + echo 2025-06-02 20:29:22.036325 | orchestrator | + echo '## Images @ testbed-node-2' 
2025-06-02 20:29:22.036332 | orchestrator | + echo
2025-06-02 20:29:22.036337 | orchestrator | + osism container testbed-node-2 images
2025-06-02 20:29:24.179378 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 20:29:24.179558 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 271b9d293e19 17 hours ago 1.27GB
2025-06-02 20:29:24.179575 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 392808c41677 19 hours ago 319MB
2025-06-02 20:29:24.179585 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 d83e4c60464a 19 hours ago 629MB
2025-06-02 20:29:24.179595 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 83dfa36b0b09 19 hours ago 376MB
2025-06-02 20:29:24.179605 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5b108bf8b06 19 hours ago 319MB
2025-06-02 20:29:24.179614 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 9534d2527bd9 19 hours ago 327MB
2025-06-02 20:29:24.179662 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 88f1dfbac042 19 hours ago 1.59GB
2025-06-02 20:29:24.179673 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 0f911db240a6 19 hours ago 1.01GB
2025-06-02 20:29:24.179683 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 307c7b2e9629 19 hours ago 1.55GB
2025-06-02 20:29:24.179693 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5b770fdbd519 19 hours ago 330MB
2025-06-02 20:29:24.179702 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d0f7c25d3497 19 hours ago 419MB
2025-06-02 20:29:24.179711 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d96ad4a06177 19 hours ago 747MB
2025-06-02 20:29:24.179721 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a4f9468e38ea 19 hours ago 325MB
2025-06-02 20:29:24.179730 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4b29449821be 19 hours ago 326MB
2025-06-02 20:29:24.179740 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 75af3084c3d1 19 hours ago 352MB
2025-06-02 20:29:24.179749 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b4411222e57e 19 hours ago 411MB
2025-06-02 20:29:24.179759 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 db5ce49c89cc 19 hours ago 345MB
2025-06-02 20:29:24.179768 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5134a96e4dfe 19 hours ago 359MB
2025-06-02 20:29:24.179777 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 03e0f3198b34 19 hours ago 354MB
2025-06-02 20:29:24.179787 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 8dfe63d220a5 19 hours ago 362MB
2025-06-02 20:29:24.179800 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 34548ea593f0 19 hours ago 362MB
2025-06-02 20:29:24.179810 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29ac703ff67c 19 hours ago 591MB
2025-06-02 20:29:24.179819 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 fe51ac78c8f1 19 hours ago 1.21GB
2025-06-02 20:29:24.179829 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 c4655637af6e 19 hours ago 947MB
2025-06-02 20:29:24.179838 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 501bf0c10100 19 hours ago 948MB
2025-06-02 20:29:24.179848 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 bff812ef8262 19 hours ago 948MB
2025-06-02 20:29:24.179857 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 e6e013a1a722 19 hours ago 947MB
2025-06-02 20:29:24.179866 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 23e5ad899301 19 hours ago 1.41GB
2025-06-02 20:29:24.179876 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 45b363b7482a 19 hours ago 1.41GB
2025-06-02 20:29:24.179885 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 760164fe4759 19 hours ago 1.29GB
2025-06-02 20:29:24.179897 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 f5741b323fe9 19 hours ago 1.29GB
2025-06-02 20:29:24.179908 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 ef9c983c3ed3 19 hours ago 1.3GB
2025-06-02 20:29:24.179919 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 35396146c866 19 hours ago 1.42GB
2025-06-02 20:29:24.179930 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 70795d3e49ef 19 hours ago 1.15GB
2025-06-02 20:29:24.179941 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 de33a20e612e 19 hours ago 1.31GB
2025-06-02 20:29:24.179958 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 462af32e366a 19 hours ago 1.2GB
2025-06-02 20:29:24.179986 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 21905100e3ed 19 hours ago 1.06GB
2025-06-02 20:29:24.179998 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9c686edf4034 19 hours ago 1.06GB
2025-06-02 20:29:24.180008 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 e5000fc07327 19 hours ago 1.06GB
2025-06-02 20:29:24.180019 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 db5d29469dee 19 hours ago 1.1GB
2025-06-02 20:29:24.180030 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 47facbd328df 19 hours ago 1.1GB
2025-06-02 20:29:24.180041 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a89f287066ef 19 hours ago 1.12GB
2025-06-02 20:29:24.180052 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 1f4bea213a07 19 hours ago 1.1GB
2025-06-02 20:29:24.180063 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 97ff50a4c378 19 hours ago 1.12GB
2025-06-02 20:29:24.180073 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 c4ed2f5a2192 19 hours ago 1.11GB
2025-06-02 20:29:24.180084 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 ea224ddfbd63 19 hours ago 1.11GB
2025-06-02 20:29:24.180095 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 68b4a4b40b7c 19 hours ago 1.13GB
2025-06-02 20:29:24.180106 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8f7230e2e54a 19 hours ago 1.04GB
2025-06-02 20:29:24.180117 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3a64d65ac616 19 hours ago 1.05GB
2025-06-02 20:29:24.180128 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c3e9f7a9a34d 19 hours ago 1.05GB
2025-06-02 20:29:24.180138 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 99480384bf9d 19 hours ago 1.06GB
2025-06-02 20:29:24.180149 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 16d05b3fd708 19 hours ago 1.05GB
2025-06-02 20:29:24.180160 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5935e336ac71 19 hours ago 1.06GB
2025-06-02 20:29:24.180171 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 ad58c3a011c5 19 hours ago 1.05GB
2025-06-02 20:29:24.180183 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 321a68afc007 19 hours ago 1.25GB
2025-06-02 20:29:24.400729 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-06-02 20:29:24.408795 | orchestrator | + set -e
2025-06-02 20:29:24.408898 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 20:29:24.409905 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 20:29:24.409949 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 20:29:24.409961 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 20:29:24.409972 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 20:29:24.409984 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 20:29:24.409997 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 20:29:24.410007 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 20:29:24.410077 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 20:29:24.410090 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02
20:29:24.410101 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 20:29:24.410112 | orchestrator | ++ export ARA=false
2025-06-02 20:29:24.410123 | orchestrator | ++ ARA=false
2025-06-02 20:29:24.410134 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 20:29:24.410144 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 20:29:24.410161 | orchestrator | ++ export TEMPEST=false
2025-06-02 20:29:24.410172 | orchestrator | ++ TEMPEST=false
2025-06-02 20:29:24.410183 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 20:29:24.410194 | orchestrator | ++ IS_ZUUL=true
2025-06-02 20:29:24.410208 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-06-02 20:29:24.410227 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-06-02 20:29:24.410244 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 20:29:24.410293 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 20:29:24.410312 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 20:29:24.410329 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 20:29:24.410347 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 20:29:24.410365 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 20:29:24.410383 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 20:29:24.410403 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 20:29:24.410423 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-02 20:29:24.410442 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-06-02 20:29:24.416421 | orchestrator | + set -e
2025-06-02 20:29:24.416551 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 20:29:24.416564 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 20:29:24.416576 | orchestrator | ++ INTERACTIVE=false
2025-06-02 20:29:24.416587 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 20:29:24.416598 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 20:29:24.416609 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-02 20:29:24.417392 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-02 20:29:24.422521 | orchestrator |
2025-06-02 20:29:24.422601 | orchestrator | # Ceph status
2025-06-02 20:29:24.422615 | orchestrator |
2025-06-02 20:29:24.422627 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 20:29:24.422643 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 20:29:24.422662 | orchestrator | + echo
2025-06-02 20:29:24.422680 | orchestrator | + echo '# Ceph status'
2025-06-02 20:29:24.422696 | orchestrator | + echo
2025-06-02 20:29:24.422714 | orchestrator | + ceph -s
2025-06-02 20:29:25.002291 | orchestrator | cluster:
2025-06-02 20:29:25.002415 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-06-02 20:29:25.002433 | orchestrator | health: HEALTH_OK
2025-06-02 20:29:25.002445 | orchestrator |
2025-06-02 20:29:25.002516 | orchestrator | services:
2025-06-02 20:29:25.002539 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m)
2025-06-02 20:29:25.002561 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-1, testbed-node-2
2025-06-02 20:29:25.002581 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-06-02 20:29:25.002600 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m)
2025-06-02 20:29:25.002618 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-06-02 20:29:25.002636 | orchestrator |
2025-06-02 20:29:25.002654 | orchestrator | data:
2025-06-02 20:29:25.002673 | orchestrator | volumes: 1/1 healthy
2025-06-02 20:29:25.002691 | orchestrator | pools: 14 pools, 401 pgs
2025-06-02 20:29:25.002708 | orchestrator | objects: 524 objects, 2.2 GiB
2025-06-02 20:29:25.002752 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-06-02 20:29:25.002772 | orchestrator | pgs: 401 active+clean
2025-06-02 20:29:25.002791 | orchestrator |
2025-06-02 20:29:25.055922 | orchestrator |
2025-06-02 20:29:25.056023 | orchestrator | # Ceph versions
2025-06-02 20:29:25.056038 | orchestrator |
2025-06-02 20:29:25.056050 | orchestrator | + echo
2025-06-02 20:29:25.056062 | orchestrator | + echo '# Ceph versions'
2025-06-02 20:29:25.056074 | orchestrator | + echo
2025-06-02 20:29:25.056085 | orchestrator | + ceph versions
2025-06-02 20:29:25.632195 | orchestrator | {
2025-06-02 20:29:25.632320 | orchestrator | "mon": {
2025-06-02 20:29:25.632351 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-06-02 20:29:25.632373 | orchestrator | },
2025-06-02 20:29:25.632391 | orchestrator | "mgr": {
2025-06-02 20:29:25.632412 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-06-02 20:29:25.632431 | orchestrator | },
2025-06-02 20:29:25.632513 | orchestrator | "osd": {
2025-06-02 20:29:25.632535 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-06-02 20:29:25.632547 | orchestrator | },
2025-06-02 20:29:25.632558 | orchestrator | "mds": {
2025-06-02 20:29:25.632569 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-06-02 20:29:25.632579 | orchestrator | },
2025-06-02 20:29:25.632590 | orchestrator | "rgw": {
2025-06-02 20:29:25.632600 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-06-02 20:29:25.632611 | orchestrator | },
2025-06-02 20:29:25.632622 | orchestrator | "overall": {
2025-06-02 20:29:25.632633 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-06-02 20:29:25.632671 | orchestrator | }
2025-06-02 20:29:25.632683 | orchestrator | }
2025-06-02 20:29:25.682705 | orchestrator |
2025-06-02 20:29:25.682799 | orchestrator | # Ceph OSD tree
2025-06-02 20:29:25.682813 |
orchestrator |
2025-06-02 20:29:25.682823 | orchestrator | + echo
2025-06-02 20:29:25.682833 | orchestrator | + echo '# Ceph OSD tree'
2025-06-02 20:29:25.682844 | orchestrator | + echo
2025-06-02 20:29:25.682854 | orchestrator | + ceph osd df tree
2025-06-02 20:29:26.201378 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-06-02 20:29:26.201584 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2025-06-02 20:29:26.201614 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2025-06-02 20:29:26.201638 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.69 0.96 190 up osd.0
2025-06-02 20:29:26.201658 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.14 1.04 202 up osd.4
2025-06-02 20:29:26.201678 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-06-02 20:29:26.201697 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.43 1.09 184 up osd.1
2025-06-02 20:29:26.201717 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.40 0.91 204 up osd.3
2025-06-02 20:29:26.201736 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-06-02 20:29:26.201757 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.55 0.94 195 up osd.2
2025-06-02 20:29:26.201776 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.28 1.06 195 up osd.5
2025-06-02 20:29:26.201815 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-06-02 20:29:26.201834 | orchestrator | MIN/MAX VAR: 0.91/1.09 STDDEV: 0.39
2025-06-02 20:29:26.244932 | orchestrator |
2025-06-02 20:29:26.245029 | orchestrator | # Ceph monitor status
2025-06-02 20:29:26.245044 | orchestrator |
2025-06-02 20:29:26.245055 | orchestrator | + echo
2025-06-02 20:29:26.245067 | orchestrator | + echo '# Ceph monitor status'
2025-06-02 20:29:26.245078 | orchestrator | + echo
2025-06-02 20:29:26.245089 | orchestrator | + ceph mon stat
2025-06-02 20:29:26.780192 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-06-02 20:29:26.832410 | orchestrator |
2025-06-02 20:29:26.832556 | orchestrator | # Ceph quorum status
2025-06-02 20:29:26.832569 | orchestrator |
2025-06-02 20:29:26.832576 | orchestrator | + echo
2025-06-02 20:29:26.832582 | orchestrator | + echo '# Ceph quorum status'
2025-06-02 20:29:26.832588 | orchestrator | + echo
2025-06-02 20:29:26.833166 | orchestrator | + ceph quorum_status
2025-06-02 20:29:26.833189 | orchestrator | + jq
2025-06-02 20:29:27.466374 | orchestrator | {
2025-06-02 20:29:27.466561 | orchestrator | "election_epoch": 4,
2025-06-02 20:29:27.466580 | orchestrator | "quorum": [
2025-06-02 20:29:27.466593 | orchestrator | 0,
2025-06-02 20:29:27.466604 | orchestrator | 1,
2025-06-02 20:29:27.466615 | orchestrator | 2
2025-06-02 20:29:27.466625 | orchestrator | ],
2025-06-02 20:29:27.466636 | orchestrator | "quorum_names": [
2025-06-02 20:29:27.466647 | orchestrator | "testbed-node-0",
2025-06-02 20:29:27.466658 | orchestrator | "testbed-node-1",
2025-06-02 20:29:27.466669 | orchestrator | "testbed-node-2"
2025-06-02 20:29:27.466679 | orchestrator | ],
2025-06-02 20:29:27.466691 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-06-02 20:29:27.466712 | orchestrator | "quorum_age": 1628,
2025-06-02 20:29:27.466730 | orchestrator | "features": {
2025-06-02
20:29:27.466748 | orchestrator | "quorum_con": "4540138322906710015",
2025-06-02 20:29:27.466765 | orchestrator | "quorum_mon": [
2025-06-02 20:29:27.466830 | orchestrator | "kraken",
2025-06-02 20:29:27.466849 | orchestrator | "luminous",
2025-06-02 20:29:27.466866 | orchestrator | "mimic",
2025-06-02 20:29:27.466882 | orchestrator | "osdmap-prune",
2025-06-02 20:29:27.466899 | orchestrator | "nautilus",
2025-06-02 20:29:27.466916 | orchestrator | "octopus",
2025-06-02 20:29:27.466933 | orchestrator | "pacific",
2025-06-02 20:29:27.466954 | orchestrator | "elector-pinging",
2025-06-02 20:29:27.466972 | orchestrator | "quincy",
2025-06-02 20:29:27.466990 | orchestrator | "reef"
2025-06-02 20:29:27.467008 | orchestrator | ]
2025-06-02 20:29:27.467028 | orchestrator | },
2025-06-02 20:29:27.467048 | orchestrator | "monmap": {
2025-06-02 20:29:27.467062 | orchestrator | "epoch": 1,
2025-06-02 20:29:27.467075 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-06-02 20:29:27.467088 | orchestrator | "modified": "2025-06-02T20:02:07.600154Z",
2025-06-02 20:29:27.467102 | orchestrator | "created": "2025-06-02T20:02:07.600154Z",
2025-06-02 20:29:27.467114 | orchestrator | "min_mon_release": 18,
2025-06-02 20:29:27.467126 | orchestrator | "min_mon_release_name": "reef",
2025-06-02 20:29:27.467139 | orchestrator | "election_strategy": 1,
2025-06-02 20:29:27.467151 | orchestrator | "disallowed_leaders: ": "",
2025-06-02 20:29:27.467163 | orchestrator | "stretch_mode": false,
2025-06-02 20:29:27.467176 | orchestrator | "tiebreaker_mon": "",
2025-06-02 20:29:27.467188 | orchestrator | "removed_ranks: ": "",
2025-06-02 20:29:27.467201 | orchestrator | "features": {
2025-06-02 20:29:27.467212 | orchestrator | "persistent": [
2025-06-02 20:29:27.467224 | orchestrator | "kraken",
2025-06-02 20:29:27.467236 | orchestrator | "luminous",
2025-06-02 20:29:27.467247 | orchestrator | "mimic",
2025-06-02 20:29:27.467259 | orchestrator | "osdmap-prune",
2025-06-02 20:29:27.467271 | orchestrator | "nautilus",
2025-06-02 20:29:27.467283 | orchestrator | "octopus",
2025-06-02 20:29:27.467295 | orchestrator | "pacific",
2025-06-02 20:29:27.467308 | orchestrator | "elector-pinging",
2025-06-02 20:29:27.467320 | orchestrator | "quincy",
2025-06-02 20:29:27.467332 | orchestrator | "reef"
2025-06-02 20:29:27.467345 | orchestrator | ],
2025-06-02 20:29:27.467356 | orchestrator | "optional": []
2025-06-02 20:29:27.467366 | orchestrator | },
2025-06-02 20:29:27.467378 | orchestrator | "mons": [
2025-06-02 20:29:27.467389 | orchestrator | {
2025-06-02 20:29:27.467400 | orchestrator | "rank": 0,
2025-06-02 20:29:27.467410 | orchestrator | "name": "testbed-node-0",
2025-06-02 20:29:27.467421 | orchestrator | "public_addrs": {
2025-06-02 20:29:27.467432 | orchestrator | "addrvec": [
2025-06-02 20:29:27.467443 | orchestrator | {
2025-06-02 20:29:27.467480 | orchestrator | "type": "v2",
2025-06-02 20:29:27.467492 | orchestrator | "addr": "192.168.16.10:3300",
2025-06-02 20:29:27.467503 | orchestrator | "nonce": 0
2025-06-02 20:29:27.467514 | orchestrator | },
2025-06-02 20:29:27.467525 | orchestrator | {
2025-06-02 20:29:27.467535 | orchestrator | "type": "v1",
2025-06-02 20:29:27.467546 | orchestrator | "addr": "192.168.16.10:6789",
2025-06-02 20:29:27.467556 | orchestrator | "nonce": 0
2025-06-02 20:29:27.467567 | orchestrator | }
2025-06-02 20:29:27.467578 | orchestrator | ]
2025-06-02 20:29:27.467588 | orchestrator | },
2025-06-02 20:29:27.467599 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-06-02 20:29:27.467610 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-06-02 20:29:27.467620 | orchestrator | "priority": 0,
2025-06-02 20:29:27.467631 | orchestrator | "weight": 0,
2025-06-02 20:29:27.467642 | orchestrator | "crush_location": "{}"
2025-06-02 20:29:27.467652 | orchestrator | },
2025-06-02 20:29:27.467664 | orchestrator | {
2025-06-02 20:29:27.467681 | orchestrator | "rank": 1,
2025-06-02 20:29:27.467700 | orchestrator | "name": "testbed-node-1",
2025-06-02 20:29:27.467716 | orchestrator | "public_addrs": {
2025-06-02 20:29:27.467744 | orchestrator | "addrvec": [
2025-06-02 20:29:27.467763 | orchestrator | {
2025-06-02 20:29:27.467780 | orchestrator | "type": "v2",
2025-06-02 20:29:27.467797 | orchestrator | "addr": "192.168.16.11:3300",
2025-06-02 20:29:27.467813 | orchestrator | "nonce": 0
2025-06-02 20:29:27.467828 | orchestrator | },
2025-06-02 20:29:27.467845 | orchestrator | {
2025-06-02 20:29:27.467864 | orchestrator | "type": "v1",
2025-06-02 20:29:27.467881 | orchestrator | "addr": "192.168.16.11:6789",
2025-06-02 20:29:27.467901 | orchestrator | "nonce": 0
2025-06-02 20:29:27.467918 | orchestrator | }
2025-06-02 20:29:27.467936 | orchestrator | ]
2025-06-02 20:29:27.467960 | orchestrator | },
2025-06-02 20:29:27.467971 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-06-02 20:29:27.467982 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-06-02 20:29:27.467992 | orchestrator | "priority": 0,
2025-06-02 20:29:27.468003 | orchestrator | "weight": 0,
2025-06-02 20:29:27.468013 | orchestrator | "crush_location": "{}"
2025-06-02 20:29:27.468024 | orchestrator | },
2025-06-02 20:29:27.468035 | orchestrator | {
2025-06-02 20:29:27.468045 | orchestrator | "rank": 2,
2025-06-02 20:29:27.468056 | orchestrator | "name": "testbed-node-2",
2025-06-02 20:29:27.468067 | orchestrator | "public_addrs": {
2025-06-02 20:29:27.468077 | orchestrator | "addrvec": [
2025-06-02 20:29:27.468088 | orchestrator | {
2025-06-02 20:29:27.468098 | orchestrator | "type": "v2",
2025-06-02 20:29:27.468109 | orchestrator | "addr": "192.168.16.12:3300",
2025-06-02 20:29:27.468120 | orchestrator | "nonce": 0
2025-06-02 20:29:27.468130 | orchestrator | },
2025-06-02 20:29:27.468141 | orchestrator | {
2025-06-02 20:29:27.468152 | orchestrator | "type": "v1",
2025-06-02 20:29:27.468162 | orchestrator | "addr": "192.168.16.12:6789",
2025-06-02 20:29:27.468173 | orchestrator | "nonce": 0
2025-06-02 20:29:27.468184 | orchestrator | }
2025-06-02 20:29:27.468194 | orchestrator | ]
2025-06-02 20:29:27.468205 | orchestrator | },
2025-06-02 20:29:27.468216 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-06-02 20:29:27.468226 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-06-02 20:29:27.468237 | orchestrator | "priority": 0,
2025-06-02 20:29:27.468247 | orchestrator | "weight": 0,
2025-06-02 20:29:27.468258 | orchestrator | "crush_location": "{}"
2025-06-02 20:29:27.468269 | orchestrator | }
2025-06-02 20:29:27.468279 | orchestrator | ]
2025-06-02 20:29:27.468290 | orchestrator | }
2025-06-02 20:29:27.468301 | orchestrator | }
2025-06-02 20:29:27.468312 | orchestrator |
2025-06-02 20:29:27.468323 | orchestrator | # Ceph free space status
2025-06-02 20:29:27.468334 | orchestrator | + echo
2025-06-02 20:29:27.468345 | orchestrator | + echo '# Ceph free space status'
2025-06-02 20:29:27.468355 | orchestrator |
2025-06-02 20:29:27.468366 | orchestrator | + echo
2025-06-02 20:29:27.468377 | orchestrator | + ceph df
2025-06-02 20:29:28.064315 | orchestrator | --- RAW STORAGE ---
2025-06-02 20:29:28.064392 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-06-02 20:29:28.064410 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-06-02 20:29:28.064415 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-06-02 20:29:28.064421 | orchestrator |
2025-06-02 20:29:28.064426 | orchestrator | --- POOLS ---
2025-06-02 20:29:28.064432 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-06-02 20:29:28.064438 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-06-02 20:29:28.064443 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-06-02 20:29:28.064448 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-06-02 20:29:28.064521 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-06-02 20:29:28.064527 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-06-02 20:29:28.064532 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-06-02 20:29:28.064536 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-06-02 20:29:28.064541 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-06-02 20:29:28.064546 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2025-06-02 20:29:28.064551 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 20:29:28.064555 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 20:29:28.064560 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.90 35 GiB
2025-06-02 20:29:28.064564 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 20:29:28.064569 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 20:29:28.106653 | orchestrator | ++ semver latest 5.0.0
2025-06-02 20:29:28.161126 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-02 20:29:28.161237 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-02 20:29:28.161256 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-06-02 20:29:28.161276 | orchestrator | + osism apply facts
2025-06-02 20:29:29.848609 | orchestrator | Registering Redlock._acquired_script
2025-06-02 20:29:29.848703 | orchestrator | Registering Redlock._extend_script
2025-06-02 20:29:29.848712 | orchestrator | Registering Redlock._release_script
2025-06-02 20:29:29.906321 | orchestrator | 2025-06-02 20:29:29 | INFO  | Task 602ae98d-682b-48ff-96dd-881f4dda238c (facts) was prepared for execution.
2025-06-02 20:29:29.906396 | orchestrator | 2025-06-02 20:29:29 | INFO  | It takes a moment until task 602ae98d-682b-48ff-96dd-881f4dda238c (facts) has been started and output is visible here.
2025-06-02 20:29:33.979938 | orchestrator |
2025-06-02 20:29:33.980616 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 20:29:33.982071 | orchestrator |
2025-06-02 20:29:33.986265 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 20:29:33.987125 | orchestrator | Monday 02 June 2025 20:29:33 +0000 (0:00:00.270) 0:00:00.270 ***********
2025-06-02 20:29:35.503204 | orchestrator | ok: [testbed-manager]
2025-06-02 20:29:35.503957 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:29:35.508151 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:29:35.508721 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:29:35.509572 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:35.510425 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:35.512859 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:35.514395 | orchestrator |
2025-06-02 20:29:35.518292 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 20:29:35.518792 | orchestrator | Monday 02 June 2025 20:29:35 +0000 (0:00:01.521) 0:00:01.791 ***********
2025-06-02 20:29:35.683846 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:29:35.767116 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:29:35.844580 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:29:35.921132 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:29:35.999484 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:36.732084 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:36.732218 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:36.732815 | orchestrator |
2025-06-02 20:29:36.735028 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 20:29:36.736031 | orchestrator |
2025-06-02 20:29:36.736957 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 20:29:36.737848 | orchestrator | Monday 02 June 2025 20:29:36 +0000 (0:00:01.229) 0:00:03.020 ***********
2025-06-02 20:29:43.422604 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:29:43.423702 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:29:43.428336 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:29:43.428412 | orchestrator | ok: [testbed-manager]
2025-06-02 20:29:43.428425 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:43.429547 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:43.429582 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:43.430734 | orchestrator |
2025-06-02 20:29:43.431524 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 20:29:43.432216 | orchestrator |
2025-06-02 20:29:43.432889 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 20:29:43.433395 | orchestrator | Monday 02 June 2025 20:29:43 +0000 (0:00:06.694) 0:00:09.714 ***********
2025-06-02 20:29:43.591398 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:29:43.671097 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:29:43.752765 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:29:43.835399 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:29:43.916615 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:43.956477 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:43.957054 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:43.958408 | orchestrator |
2025-06-02 20:29:43.959278 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:29:43.959817 | orchestrator | 2025-06-02 20:29:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 20:29:43.959868 | orchestrator | 2025-06-02 20:29:43 | INFO  | Please wait and do not abort execution.
2025-06-02 20:29:43.960757 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:29:43.961536 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:29:43.962136 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:29:43.962791 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:29:43.963422 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:29:43.963870 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:29:43.964637 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:29:43.964809 | orchestrator |
2025-06-02 20:29:43.965543 | orchestrator |
2025-06-02 20:29:43.965786 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:29:43.966334 | orchestrator | Monday 02 June 2025 20:29:43 +0000 (0:00:00.535) 0:00:10.249 ***********
2025-06-02 20:29:43.966857 | orchestrator | ===============================================================================
2025-06-02 20:29:43.967212 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.69s
2025-06-02 20:29:43.967782 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.52s
2025-06-02 20:29:43.968552 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s
2025-06-02 20:29:43.969665 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2025-06-02
20:29:44.642216 | orchestrator | + osism validate ceph-mons 2025-06-02 20:29:46.314664 | orchestrator | Registering Redlock._acquired_script 2025-06-02 20:29:46.314785 | orchestrator | Registering Redlock._extend_script 2025-06-02 20:29:46.314814 | orchestrator | Registering Redlock._release_script 2025-06-02 20:30:05.319956 | orchestrator | 2025-06-02 20:30:05.320044 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-02 20:30:05.320055 | orchestrator | 2025-06-02 20:30:05.320065 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-02 20:30:05.320073 | orchestrator | Monday 02 June 2025 20:29:50 +0000 (0:00:00.452) 0:00:00.452 *********** 2025-06-02 20:30:05.320081 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:05.320087 | orchestrator | 2025-06-02 20:30:05.320094 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-02 20:30:05.320100 | orchestrator | Monday 02 June 2025 20:29:51 +0000 (0:00:00.566) 0:00:01.019 *********** 2025-06-02 20:30:05.320106 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:05.320113 | orchestrator | 2025-06-02 20:30:05.320120 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-02 20:30:05.320127 | orchestrator | Monday 02 June 2025 20:29:51 +0000 (0:00:00.635) 0:00:01.654 *********** 2025-06-02 20:30:05.320132 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320138 | orchestrator | 2025-06-02 20:30:05.320142 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-02 20:30:05.320146 | orchestrator | Monday 02 June 2025 20:29:52 +0000 (0:00:00.184) 0:00:01.838 *********** 2025-06-02 20:30:05.320150 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320175 | orchestrator | ok: 
[testbed-node-1] 2025-06-02 20:30:05.320182 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:30:05.320188 | orchestrator | 2025-06-02 20:30:05.320194 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-02 20:30:05.320210 | orchestrator | Monday 02 June 2025 20:29:52 +0000 (0:00:00.260) 0:00:02.098 *********** 2025-06-02 20:30:05.320214 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:30:05.320218 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320221 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:30:05.320225 | orchestrator | 2025-06-02 20:30:05.320229 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-02 20:30:05.320235 | orchestrator | Monday 02 June 2025 20:29:53 +0000 (0:00:00.948) 0:00:03.047 *********** 2025-06-02 20:30:05.320241 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.320248 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:30:05.320254 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:30:05.320259 | orchestrator | 2025-06-02 20:30:05.320266 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-02 20:30:05.320272 | orchestrator | Monday 02 June 2025 20:29:53 +0000 (0:00:00.255) 0:00:03.303 *********** 2025-06-02 20:30:05.320279 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320285 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:30:05.320292 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:30:05.320298 | orchestrator | 2025-06-02 20:30:05.320305 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 20:30:05.320311 | orchestrator | Monday 02 June 2025 20:29:53 +0000 (0:00:00.380) 0:00:03.683 *********** 2025-06-02 20:30:05.320317 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320321 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:30:05.320325 | 
orchestrator | ok: [testbed-node-2] 2025-06-02 20:30:05.320330 | orchestrator | 2025-06-02 20:30:05.320336 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-02 20:30:05.320342 | orchestrator | Monday 02 June 2025 20:29:54 +0000 (0:00:00.255) 0:00:03.939 *********** 2025-06-02 20:30:05.320349 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.320355 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:30:05.320360 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:30:05.320366 | orchestrator | 2025-06-02 20:30:05.320371 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-02 20:30:05.320378 | orchestrator | Monday 02 June 2025 20:29:54 +0000 (0:00:00.280) 0:00:04.219 *********** 2025-06-02 20:30:05.320385 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320393 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:30:05.320397 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:30:05.320400 | orchestrator | 2025-06-02 20:30:05.320565 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 20:30:05.320573 | orchestrator | Monday 02 June 2025 20:29:54 +0000 (0:00:00.325) 0:00:04.545 *********** 2025-06-02 20:30:05.320580 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.320586 | orchestrator | 2025-06-02 20:30:05.320592 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 20:30:05.320599 | orchestrator | Monday 02 June 2025 20:29:55 +0000 (0:00:00.698) 0:00:05.243 *********** 2025-06-02 20:30:05.320606 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.320613 | orchestrator | 2025-06-02 20:30:05.320619 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 20:30:05.320625 | orchestrator | Monday 02 June 2025 20:29:55 +0000 (0:00:00.242) 
0:00:05.485 *********** 2025-06-02 20:30:05.320632 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.320638 | orchestrator | 2025-06-02 20:30:05.320645 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:30:05.320651 | orchestrator | Monday 02 June 2025 20:29:55 +0000 (0:00:00.247) 0:00:05.733 *********** 2025-06-02 20:30:05.320658 | orchestrator | 2025-06-02 20:30:05.320666 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:30:05.320687 | orchestrator | Monday 02 June 2025 20:29:56 +0000 (0:00:00.068) 0:00:05.801 *********** 2025-06-02 20:30:05.320694 | orchestrator | 2025-06-02 20:30:05.320700 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:30:05.320707 | orchestrator | Monday 02 June 2025 20:29:56 +0000 (0:00:00.069) 0:00:05.870 *********** 2025-06-02 20:30:05.320714 | orchestrator | 2025-06-02 20:30:05.320721 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 20:30:05.320727 | orchestrator | Monday 02 June 2025 20:29:56 +0000 (0:00:00.071) 0:00:05.942 *********** 2025-06-02 20:30:05.320734 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.320740 | orchestrator | 2025-06-02 20:30:05.320747 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-02 20:30:05.320754 | orchestrator | Monday 02 June 2025 20:29:56 +0000 (0:00:00.252) 0:00:06.194 *********** 2025-06-02 20:30:05.320760 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.320767 | orchestrator | 2025-06-02 20:30:05.320791 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-02 20:30:05.320798 | orchestrator | Monday 02 June 2025 20:29:56 +0000 (0:00:00.287) 0:00:06.481 *********** 2025-06-02 20:30:05.320805 | orchestrator | 
ok: [testbed-node-0] 2025-06-02 20:30:05.320812 | orchestrator | 2025-06-02 20:30:05.320818 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-02 20:30:05.320826 | orchestrator | Monday 02 June 2025 20:29:56 +0000 (0:00:00.126) 0:00:06.608 *********** 2025-06-02 20:30:05.320832 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:30:05.320839 | orchestrator | 2025-06-02 20:30:05.320846 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-02 20:30:05.320852 | orchestrator | Monday 02 June 2025 20:29:58 +0000 (0:00:01.568) 0:00:08.177 *********** 2025-06-02 20:30:05.320859 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320866 | orchestrator | 2025-06-02 20:30:05.320873 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-02 20:30:05.320881 | orchestrator | Monday 02 June 2025 20:29:58 +0000 (0:00:00.293) 0:00:08.470 *********** 2025-06-02 20:30:05.320886 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.320890 | orchestrator | 2025-06-02 20:30:05.320894 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-02 20:30:05.320898 | orchestrator | Monday 02 June 2025 20:29:59 +0000 (0:00:00.307) 0:00:08.778 *********** 2025-06-02 20:30:05.320901 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320905 | orchestrator | 2025-06-02 20:30:05.320909 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-02 20:30:05.320913 | orchestrator | Monday 02 June 2025 20:29:59 +0000 (0:00:00.339) 0:00:09.118 *********** 2025-06-02 20:30:05.320929 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320935 | orchestrator | 2025-06-02 20:30:05.320941 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-02 20:30:05.320947 | orchestrator | 
Monday 02 June 2025 20:29:59 +0000 (0:00:00.327) 0:00:09.445 *********** 2025-06-02 20:30:05.320953 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.320959 | orchestrator | 2025-06-02 20:30:05.320966 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-02 20:30:05.320972 | orchestrator | Monday 02 June 2025 20:29:59 +0000 (0:00:00.109) 0:00:09.554 *********** 2025-06-02 20:30:05.320979 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.320987 | orchestrator | 2025-06-02 20:30:05.320993 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-02 20:30:05.321001 | orchestrator | Monday 02 June 2025 20:29:59 +0000 (0:00:00.134) 0:00:09.689 *********** 2025-06-02 20:30:05.321006 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.321011 | orchestrator | 2025-06-02 20:30:05.321017 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-02 20:30:05.321023 | orchestrator | Monday 02 June 2025 20:30:00 +0000 (0:00:00.106) 0:00:09.795 *********** 2025-06-02 20:30:05.321029 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:30:05.321044 | orchestrator | 2025-06-02 20:30:05.321050 | orchestrator | TASK [Set health test data] **************************************************** 2025-06-02 20:30:05.321057 | orchestrator | Monday 02 June 2025 20:30:01 +0000 (0:00:01.404) 0:00:11.200 *********** 2025-06-02 20:30:05.321063 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.321066 | orchestrator | 2025-06-02 20:30:05.321070 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-06-02 20:30:05.321074 | orchestrator | Monday 02 June 2025 20:30:01 +0000 (0:00:00.290) 0:00:11.490 *********** 2025-06-02 20:30:05.321078 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.321082 | orchestrator | 2025-06-02 20:30:05.321085 | orchestrator | 
TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-02 20:30:05.321089 | orchestrator | Monday 02 June 2025 20:30:01 +0000 (0:00:00.139) 0:00:11.630 *********** 2025-06-02 20:30:05.321093 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:05.321096 | orchestrator | 2025-06-02 20:30:05.321102 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-02 20:30:05.321108 | orchestrator | Monday 02 June 2025 20:30:02 +0000 (0:00:00.139) 0:00:11.769 *********** 2025-06-02 20:30:05.321113 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.321119 | orchestrator | 2025-06-02 20:30:05.321125 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-02 20:30:05.321135 | orchestrator | Monday 02 June 2025 20:30:02 +0000 (0:00:00.147) 0:00:11.917 *********** 2025-06-02 20:30:05.321141 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.321146 | orchestrator | 2025-06-02 20:30:05.321152 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 20:30:05.321157 | orchestrator | Monday 02 June 2025 20:30:02 +0000 (0:00:00.319) 0:00:12.236 *********** 2025-06-02 20:30:05.321163 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:05.321170 | orchestrator | 2025-06-02 20:30:05.321176 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 20:30:05.321183 | orchestrator | Monday 02 June 2025 20:30:02 +0000 (0:00:00.247) 0:00:12.484 *********** 2025-06-02 20:30:05.321188 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:30:05.321192 | orchestrator | 2025-06-02 20:30:05.321196 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 20:30:05.321200 | orchestrator | Monday 02 June 2025 20:30:02 +0000 (0:00:00.232) 0:00:12.716 
*********** 2025-06-02 20:30:05.321203 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:05.321207 | orchestrator | 2025-06-02 20:30:05.321211 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 20:30:05.321215 | orchestrator | Monday 02 June 2025 20:30:04 +0000 (0:00:01.581) 0:00:14.298 *********** 2025-06-02 20:30:05.321222 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:05.321226 | orchestrator | 2025-06-02 20:30:05.321230 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 20:30:05.321234 | orchestrator | Monday 02 June 2025 20:30:04 +0000 (0:00:00.287) 0:00:14.585 *********** 2025-06-02 20:30:05.321238 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:05.321242 | orchestrator | 2025-06-02 20:30:05.321253 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:30:07.483465 | orchestrator | Monday 02 June 2025 20:30:05 +0000 (0:00:00.252) 0:00:14.837 *********** 2025-06-02 20:30:07.483595 | orchestrator | 2025-06-02 20:30:07.483612 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:30:07.483624 | orchestrator | Monday 02 June 2025 20:30:05 +0000 (0:00:00.072) 0:00:14.910 *********** 2025-06-02 20:30:07.483635 | orchestrator | 2025-06-02 20:30:07.483647 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:30:07.483657 | orchestrator | Monday 02 June 2025 20:30:05 +0000 (0:00:00.069) 0:00:14.979 *********** 2025-06-02 20:30:07.483668 | orchestrator | 2025-06-02 20:30:07.483705 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 20:30:07.483716 | orchestrator | Monday 02 June 2025 20:30:05 +0000 
(0:00:00.072) 0:00:15.051 *********** 2025-06-02 20:30:07.483728 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:07.483739 | orchestrator | 2025-06-02 20:30:07.483749 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 20:30:07.483760 | orchestrator | Monday 02 June 2025 20:30:06 +0000 (0:00:01.273) 0:00:16.324 *********** 2025-06-02 20:30:07.483770 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-02 20:30:07.483781 | orchestrator |  "msg": [ 2025-06-02 20:30:07.483793 | orchestrator |  "Validator run completed.", 2025-06-02 20:30:07.483804 | orchestrator |  "You can find the report file here:", 2025-06-02 20:30:07.483815 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-02T20:29:51+00:00-report.json", 2025-06-02 20:30:07.483834 | orchestrator |  "on the following host:", 2025-06-02 20:30:07.483852 | orchestrator |  "testbed-manager" 2025-06-02 20:30:07.483882 | orchestrator |  ] 2025-06-02 20:30:07.483901 | orchestrator | } 2025-06-02 20:30:07.483918 | orchestrator | 2025-06-02 20:30:07.483936 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:30:07.483957 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-06-02 20:30:07.483978 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 20:30:07.483999 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 20:30:07.484017 | orchestrator | 2025-06-02 20:30:07.484034 | orchestrator | 2025-06-02 20:30:07.484053 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:30:07.484071 | orchestrator | Monday 02 June 2025 20:30:07 +0000 (0:00:00.594) 0:00:16.919 *********** 
2025-06-02 20:30:07.484088 | orchestrator | ===============================================================================
2025-06-02 20:30:07.484107 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s
2025-06-02 20:30:07.484125 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.57s
2025-06-02 20:30:07.484144 | orchestrator | Gather status data ------------------------------------------------------ 1.40s
2025-06-02 20:30:07.484164 | orchestrator | Write report file ------------------------------------------------------- 1.27s
2025-06-02 20:30:07.484184 | orchestrator | Get container info ------------------------------------------------------ 0.95s
2025-06-02 20:30:07.484196 | orchestrator | Aggregate test results step one ----------------------------------------- 0.70s
2025-06-02 20:30:07.484209 | orchestrator | Create report output directory ------------------------------------------ 0.64s
2025-06-02 20:30:07.484223 | orchestrator | Print report file information ------------------------------------------- 0.59s
2025-06-02 20:30:07.484239 | orchestrator | Get timestamp for report file ------------------------------------------- 0.57s
2025-06-02 20:30:07.484266 | orchestrator | Set test result to passed if container is existing ---------------------- 0.38s
2025-06-02 20:30:07.484306 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s
2025-06-02 20:30:07.484325 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s
2025-06-02 20:30:07.484343 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.33s
2025-06-02 20:30:07.484360 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.32s
2025-06-02 20:30:07.484379 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.31s
2025-06-02 20:30:07.484397 | orchestrator | Set quorum test data ---------------------------------------------------- 0.29s
2025-06-02 20:30:07.484444 | orchestrator | Set health test data ---------------------------------------------------- 0.29s
2025-06-02 20:30:07.484469 | orchestrator | Fail due to missing containers ------------------------------------------ 0.29s
2025-06-02 20:30:07.484480 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s
2025-06-02 20:30:07.484491 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.28s
2025-06-02 20:30:07.750115 | orchestrator | + osism validate ceph-mgrs
2025-06-02 20:30:09.474564 | orchestrator | Registering Redlock._acquired_script
2025-06-02 20:30:09.475777 | orchestrator | Registering Redlock._extend_script
2025-06-02 20:30:09.475863 | orchestrator | Registering Redlock._release_script
2025-06-02 20:30:28.826557 | orchestrator |
2025-06-02 20:30:28.826686 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-06-02 20:30:28.826711 | orchestrator |
2025-06-02 20:30:28.826724 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-02 20:30:28.826735 | orchestrator | Monday 02 June 2025 20:30:13 +0000 (0:00:00.428) 0:00:00.428 ***********
2025-06-02 20:30:28.826745 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 20:30:28.826755 | orchestrator |
2025-06-02 20:30:28.826765 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-02 20:30:28.826775 | orchestrator | Monday 02 June 2025 20:30:14 +0000 (0:00:00.662) 0:00:01.091 ***********
2025-06-02 20:30:28.826784 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 20:30:28.826794 | orchestrator |
2025-06-02 20:30:28.826803 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-02 20:30:28.826813 | orchestrator | Monday 02 June 2025 20:30:15 +0000 (0:00:00.816) 0:00:01.907 ***********
2025-06-02 20:30:28.826823 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.826833 | orchestrator |
2025-06-02 20:30:28.826844 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-02 20:30:28.826853 | orchestrator | Monday 02 June 2025 20:30:15 +0000 (0:00:00.243) 0:00:02.150 ***********
2025-06-02 20:30:28.826863 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.826873 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:30:28.826882 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:30:28.826893 | orchestrator |
2025-06-02 20:30:28.826908 | orchestrator | TASK [Get container info] ******************************************************
2025-06-02 20:30:28.826925 | orchestrator | Monday 02 June 2025 20:30:15 +0000 (0:00:00.290) 0:00:02.441 ***********
2025-06-02 20:30:28.826941 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.826957 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:30:28.826973 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:30:28.826987 | orchestrator |
2025-06-02 20:30:28.826999 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-02 20:30:28.827010 | orchestrator | Monday 02 June 2025 20:30:16 +0000 (0:00:01.056) 0:00:03.498 ***********
2025-06-02 20:30:28.827026 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:30:28.827045 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:30:28.827062 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:30:28.827073 | orchestrator |
2025-06-02 20:30:28.827084 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-02 20:30:28.827096 | orchestrator | Monday 02 June 2025 20:30:17 +0000 (0:00:00.288) 0:00:03.787 ***********
2025-06-02 20:30:28.827107 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.827119 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:30:28.827130 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:30:28.827141 | orchestrator |
2025-06-02 20:30:28.827152 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:30:28.827162 | orchestrator | Monday 02 June 2025 20:30:17 +0000 (0:00:00.491) 0:00:04.278 ***********
2025-06-02 20:30:28.827171 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.827181 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:30:28.827190 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:30:28.827200 | orchestrator |
2025-06-02 20:30:28.827209 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-06-02 20:30:28.827246 | orchestrator | Monday 02 June 2025 20:30:17 +0000 (0:00:00.312) 0:00:04.590 ***********
2025-06-02 20:30:28.827256 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:30:28.827266 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:30:28.827275 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:30:28.827285 | orchestrator |
2025-06-02 20:30:28.827294 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-06-02 20:30:28.827304 | orchestrator | Monday 02 June 2025 20:30:18 +0000 (0:00:00.299) 0:00:04.890 ***********
2025-06-02 20:30:28.827314 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.827323 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:30:28.827332 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:30:28.827342 | orchestrator |
2025-06-02 20:30:28.827351 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 20:30:28.827361 | orchestrator | Monday 02 June 2025 20:30:18 +0000 (0:00:00.300) 0:00:05.190 ***********
2025-06-02 20:30:28.827371 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:30:28.827380 | orchestrator |
2025-06-02 20:30:28.827413 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 20:30:28.827425 | orchestrator | Monday 02 June 2025 20:30:19 +0000 (0:00:00.802) 0:00:05.993 ***********
2025-06-02 20:30:28.827435 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:30:28.827444 | orchestrator |
2025-06-02 20:30:28.827454 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 20:30:28.827463 | orchestrator | Monday 02 June 2025 20:30:19 +0000 (0:00:00.270) 0:00:06.264 ***********
2025-06-02 20:30:28.827473 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:30:28.827482 | orchestrator |
2025-06-02 20:30:28.827492 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:30:28.827502 | orchestrator | Monday 02 June 2025 20:30:19 +0000 (0:00:00.254) 0:00:06.518 ***********
2025-06-02 20:30:28.827511 | orchestrator |
2025-06-02 20:30:28.827521 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:30:28.827546 | orchestrator | Monday 02 June 2025 20:30:19 +0000 (0:00:00.066) 0:00:06.585 ***********
2025-06-02 20:30:28.827556 | orchestrator |
2025-06-02 20:30:28.827566 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:30:28.827575 | orchestrator | Monday 02 June 2025 20:30:20 +0000 (0:00:00.069) 0:00:06.655 ***********
2025-06-02 20:30:28.827588 | orchestrator |
2025-06-02 20:30:28.827604 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 20:30:28.827621 | orchestrator | Monday 02 June 2025 20:30:20 +0000 (0:00:00.072) 0:00:06.727 ***********
2025-06-02 20:30:28.827637 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:30:28.827653 | orchestrator |
2025-06-02 20:30:28.827668 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-02 20:30:28.827686 | orchestrator | Monday 02 June 2025 20:30:20 +0000 (0:00:00.283) 0:00:07.011 ***********
2025-06-02 20:30:28.827704 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:30:28.827720 | orchestrator |
2025-06-02 20:30:28.827752 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-06-02 20:30:28.827763 | orchestrator | Monday 02 June 2025 20:30:20 +0000 (0:00:00.245) 0:00:07.257 ***********
2025-06-02 20:30:28.827772 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.827782 | orchestrator |
2025-06-02 20:30:28.827791 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-06-02 20:30:28.827800 | orchestrator | Monday 02 June 2025 20:30:20 +0000 (0:00:00.111) 0:00:07.369 ***********
2025-06-02 20:30:28.827810 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:30:28.827819 | orchestrator |
2025-06-02 20:30:28.827829 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-06-02 20:30:28.827838 | orchestrator | Monday 02 June 2025 20:30:22 +0000 (0:00:02.026) 0:00:09.396 ***********
2025-06-02 20:30:28.827847 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.827857 | orchestrator |
2025-06-02 20:30:28.827866 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-06-02 20:30:28.827885 | orchestrator | Monday 02 June 2025 20:30:23 +0000 (0:00:00.245) 0:00:09.642 ***********
2025-06-02 20:30:28.827894 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.827903 | orchestrator |
2025-06-02 20:30:28.827913 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-06-02 20:30:28.827922 | orchestrator | Monday 02 June 2025 20:30:23 +0000 (0:00:00.474) 0:00:10.117 ***********
2025-06-02 20:30:28.827932 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:30:28.827941 | orchestrator |
2025-06-02 20:30:28.827950 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-06-02 20:30:28.827959 | orchestrator | Monday 02 June 2025 20:30:23 +0000 (0:00:00.148) 0:00:10.265 ***********
2025-06-02 20:30:28.827969 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:30:28.827978 | orchestrator |
2025-06-02 20:30:28.827987 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-02 20:30:28.827997 | orchestrator | Monday 02 June 2025 20:30:23 +0000 (0:00:00.143) 0:00:10.409 ***********
2025-06-02 20:30:28.828006 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 20:30:28.828016 | orchestrator |
2025-06-02 20:30:28.828025 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-02 20:30:28.828035 | orchestrator | Monday 02 June 2025 20:30:24 +0000 (0:00:00.244) 0:00:10.653 ***********
2025-06-02 20:30:28.828044 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:30:28.828054 | orchestrator |
2025-06-02 20:30:28.828063 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 20:30:28.828072 | orchestrator | Monday 02 June 2025 20:30:24 +0000 (0:00:00.313) 0:00:10.967 ***********
2025-06-02 20:30:28.828081 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 20:30:28.828091 | orchestrator |
2025-06-02 20:30:28.828100 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 20:30:28.828109 | orchestrator | Monday 02 June 2025 20:30:25 +0000 (0:00:01.420) 0:00:12.388 ***********
2025-06-02 20:30:28.828119 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 20:30:28.828128 | orchestrator |
2025-06-02 20:30:28.828137 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 20:30:28.828147 | orchestrator | Monday 02 June 2025 20:30:26 +0000 (0:00:00.257) 0:00:12.646 ***********
2025-06-02 20:30:28.828157 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 20:30:28.828168 | orchestrator |
2025-06-02 20:30:28.828178 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:30:28.828189 | orchestrator | Monday 02 June 2025 20:30:26 +0000 (0:00:00.281) 0:00:12.927 ***********
2025-06-02 20:30:28.828199 | orchestrator |
2025-06-02 20:30:28.828210 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:30:28.828220 | orchestrator | Monday 02 June 2025 20:30:26 +0000 (0:00:00.070) 0:00:12.998 ***********
2025-06-02 20:30:28.828231 | orchestrator |
2025-06-02 20:30:28.828242 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:30:28.828252 | orchestrator | Monday 02 June 2025 20:30:26 +0000 (0:00:00.068) 0:00:13.066 ***********
2025-06-02 20:30:28.828263 | orchestrator |
2025-06-02 20:30:28.828273 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-02 20:30:28.828283 | orchestrator | Monday 02 June 2025 20:30:26 +0000 (0:00:00.069) 0:00:13.136 ***********
2025-06-02 20:30:28.828294 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 20:30:28.828305 | orchestrator |
2025-06-02 20:30:28.828315 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 20:30:28.828325 | orchestrator | Monday 02 June 2025 20:30:28 +0000 (0:00:01.858) 0:00:14.994 ***********
2025-06-02 20:30:28.828342 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-02 20:30:28.828353 | orchestrator |  "msg": [
2025-06-02 20:30:28.828364 | orchestrator |  "Validator run completed.",
2025-06-02 20:30:28.828381 | orchestrator |  "You can find the report file here:",
2025-06-02 20:30:28.828418 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-02T20:30:14+00:00-report.json",
2025-06-02 20:30:28.828432 | orchestrator |  "on the following host:",
2025-06-02 20:30:28.828442 | orchestrator |  "testbed-manager"
2025-06-02 20:30:28.828454 | orchestrator |  ]
2025-06-02 20:30:28.828465 | orchestrator | }
2025-06-02 20:30:28.828476 | orchestrator |
2025-06-02 20:30:28.828487 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:30:28.828499 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-02 20:30:28.828510 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:30:28.828529 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:30:29.137958 | orchestrator |
2025-06-02 20:30:29.138095 | orchestrator |
2025-06-02 20:30:29.138107 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:30:29.138119 | orchestrator | Monday 02 June 2025 20:30:28 +0000 (0:00:00.409) 0:00:15.403 ***********
2025-06-02 20:30:29.138127 | orchestrator | ===============================================================================
2025-06-02 20:30:29.138136 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.03s
2025-06-02 20:30:29.138146 | orchestrator | Write report file ------------------------------------------------------- 1.86s
2025-06-02 20:30:29.138155 | orchestrator | Aggregate test results step one ----------------------------------------- 1.42s
2025-06-02 20:30:29.138163 | orchestrator | Get container info ------------------------------------------------------ 1.06s
2025-06-02 20:30:29.138172 | orchestrator | Create report output directory ------------------------------------------ 0.82s
2025-06-02 20:30:29.138181 | orchestrator | Aggregate test results step one ----------------------------------------- 0.80s
2025-06-02 20:30:29.138189 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s
2025-06-02 20:30:29.138198 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s
2025-06-02 20:30:29.138206 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.47s
2025-06-02 20:30:29.138215 | orchestrator | Print report file information ------------------------------------------- 0.41s
2025-06-02 20:30:29.138223 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.31s
2025-06-02 20:30:29.138232 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2025-06-02 20:30:29.138240 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s
2025-06-02 20:30:29.138249 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s
2025-06-02 20:30:29.138258 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s
2025-06-02 20:30:29.138266 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2025-06-02 20:30:29.138275 | orchestrator | Print report file information ------------------------------------------- 0.28s
2025-06-02 20:30:29.138283 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2025-06-02 20:30:29.138292 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2025-06-02 20:30:29.138302 | orchestrator | Aggregate test results step two
----------------------------------------- 0.26s 2025-06-02 20:30:29.400088 | orchestrator | + osism validate ceph-osds 2025-06-02 20:30:31.159025 | orchestrator | Registering Redlock._acquired_script 2025-06-02 20:30:31.159123 | orchestrator | Registering Redlock._extend_script 2025-06-02 20:30:31.159136 | orchestrator | Registering Redlock._release_script 2025-06-02 20:30:39.694526 | orchestrator | 2025-06-02 20:30:39.694664 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-06-02 20:30:39.694725 | orchestrator | 2025-06-02 20:30:39.694746 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-02 20:30:39.694763 | orchestrator | Monday 02 June 2025 20:30:35 +0000 (0:00:00.418) 0:00:00.418 *********** 2025-06-02 20:30:39.694783 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:39.694802 | orchestrator | 2025-06-02 20:30:39.694821 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 20:30:39.694840 | orchestrator | Monday 02 June 2025 20:30:36 +0000 (0:00:00.630) 0:00:01.048 *********** 2025-06-02 20:30:39.694858 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:39.694876 | orchestrator | 2025-06-02 20:30:39.694895 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-02 20:30:39.694913 | orchestrator | Monday 02 June 2025 20:30:36 +0000 (0:00:00.383) 0:00:01.432 *********** 2025-06-02 20:30:39.694929 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:30:39.694940 | orchestrator | 2025-06-02 20:30:39.694952 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-02 20:30:39.694965 | orchestrator | Monday 02 June 2025 20:30:37 +0000 (0:00:00.915) 0:00:02.347 *********** 2025-06-02 20:30:39.694979 | 
orchestrator | ok: [testbed-node-3] 2025-06-02 20:30:39.694993 | orchestrator | 2025-06-02 20:30:39.695006 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-02 20:30:39.695019 | orchestrator | Monday 02 June 2025 20:30:37 +0000 (0:00:00.124) 0:00:02.472 *********** 2025-06-02 20:30:39.695031 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:30:39.695043 | orchestrator | 2025-06-02 20:30:39.695056 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-02 20:30:39.695069 | orchestrator | Monday 02 June 2025 20:30:37 +0000 (0:00:00.131) 0:00:02.604 *********** 2025-06-02 20:30:39.695081 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:30:39.695094 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:30:39.695106 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:30:39.695117 | orchestrator | 2025-06-02 20:30:39.695131 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-02 20:30:39.695143 | orchestrator | Monday 02 June 2025 20:30:37 +0000 (0:00:00.303) 0:00:02.907 *********** 2025-06-02 20:30:39.695155 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:30:39.695167 | orchestrator | 2025-06-02 20:30:39.695179 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-02 20:30:39.695191 | orchestrator | Monday 02 June 2025 20:30:38 +0000 (0:00:00.157) 0:00:03.065 *********** 2025-06-02 20:30:39.695203 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:30:39.695215 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:30:39.695227 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:30:39.695240 | orchestrator | 2025-06-02 20:30:39.695252 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-06-02 20:30:39.695263 | orchestrator | Monday 02 June 2025 20:30:38 +0000 (0:00:00.318) 0:00:03.384 
*********** 2025-06-02 20:30:39.695277 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:30:39.695289 | orchestrator | 2025-06-02 20:30:39.695306 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 20:30:39.695324 | orchestrator | Monday 02 June 2025 20:30:38 +0000 (0:00:00.546) 0:00:03.930 *********** 2025-06-02 20:30:39.695343 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:30:39.695356 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:30:39.695367 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:30:39.695377 | orchestrator | 2025-06-02 20:30:39.695572 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-06-02 20:30:39.695602 | orchestrator | Monday 02 June 2025 20:30:39 +0000 (0:00:00.522) 0:00:04.452 *********** 2025-06-02 20:30:39.695626 | orchestrator | skipping: [testbed-node-3] => (item={'id': '797f38d720d828f41fa8a3ac1994998466a4fd1990d3449bb6d85305f4e56150', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 20:30:39.695668 | orchestrator | skipping: [testbed-node-3] => (item={'id': '76614df5f07d5cdb3ddd9d2fae2be2c1c19716314ba45ac8703cb73b1873940c', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 20:30:39.695692 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3a3363827c159109f027c2f2fb606743631fc1b298f0a55504055a344c5024c4', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 20:30:39.695713 | orchestrator | skipping: [testbed-node-3] => (item={'id': '44c086b06508ed398b88ab8e21bb117a3002e2d06a18a5c116b8e981231725c1', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 
'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-02 20:30:39.695739 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'da7b25a873391fc06478264edaee0e3abe3f9d483ab7b15a113f01b38932f736', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 20:30:39.695774 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b73f80a207d81c541f6e92238f7704973c8f1aa2437e6cd3313280f3171908df', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-02 20:30:39.695787 | orchestrator | skipping: [testbed-node-3] => (item={'id': '311680978f237ade423e8a086eac0f5c1f6a6d22bdc8f425812de2ca0c34a0d4', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 20:30:39.695798 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1390290373fe86c26b8d3a36733d9ef221dac814971a1fae5b156df7b1a2e711', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 20:30:39.695821 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cf47117a2c89da31c9b6aea500ffeab02784cd93765253ad19e41d1a756c25ea', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-02 20:30:39.695838 | orchestrator | skipping: [testbed-node-3] => (item={'id': '920421aa1050aee385dc9e234d562bd31b297f8749f7b8367fb1527ad7aa5be6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-06-02 20:30:39.695856 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'375cc6a7c5e75dc4fb6a4b58412990798f68af4bd282aeaa7837fe2543680924', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 20:30:39.695875 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e517085035a9d763d8e5572ebe7999bc9bfa277d9dbe7da7df7998593eeff8c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 20:30:39.695893 | orchestrator | ok: [testbed-node-3] => (item={'id': '798186a24d83731f8ab40dd87b620d37e802fd38fdb8930df35fe280a811e10f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 20:30:39.695912 | orchestrator | ok: [testbed-node-3] => (item={'id': '47ff414249209e58b4682ea0d72ca4c741b9eced0da5a9e47c6eb8e9dd88641f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 20:30:39.695930 | orchestrator | skipping: [testbed-node-3] => (item={'id': '63de34802abc01b327df8c7d25027a8b53257f76c6f96634723becc859e357f6', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-06-02 20:30:39.695962 | orchestrator | skipping: [testbed-node-3] => (item={'id': '71815031ad95c9a74c9aa2f50e1ab64996e8f673b9862ecb3e4dedae55115b2a', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-02 20:30:39.695978 | orchestrator | skipping: [testbed-node-3] => (item={'id': '48c23512a2f0dd89d3d0ef1e68a1b8f942807b133bad931af602cdc9a612e470', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-06-02 20:30:39.695994 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '58dec18ae0b927dcc29dc355dbe9a371041d9694743d58f41f930522a0af9ea4', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 20:30:39.696011 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1e5f58efdee0f1ece1be00d37c61641a76c6e7fb99c03036986bdad0dcd2e63b', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 20:30:39.696027 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e5e7a11bdc13ed92642fc7fc033c610a74a5988a9a8d50c60698aad455050f4a', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 20:30:39.696044 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f6e3a4a07d5f7ac0b9d0a48341979a88c8eea24e8e7fac9915f79bcf520922a5', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 20:30:39.696075 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b96c71580c1cb8899822d3149a078f7805712533cc1ef742ed6d314a6d50b5d2', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 20:30:39.849561 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'abbe63e2da6b160973e4c5ae665c4d3abcada5afa8c35aa8d11da031da501e64', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 20:30:39.849666 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6dfcaa65f21ea29930ca9ee6073e73c01ac89c271c1a8fa0bb28492d52ec784e', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes 
(healthy)'})  2025-06-02 20:30:39.849680 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd6555835458babb70395a177015f63e18f9988f9bfdd6784c41f107d4f958e37', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 20:30:39.849707 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1f29e2edaea94a0ed85b511e8a282eef4280de090370caf271045e6b24d64316', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 20:30:39.849717 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c5506576e804ec3469f86de4c6a90af8267dcae0be0c5b61072bdc3b224a9665', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 20:30:39.849727 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8bd5c8d15e7b576d2b9b1e94b81d7bd8457eb6515925a14a83f8ec69a18332b6', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 20:30:39.849737 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bcca8b7c3bc1737b4bfa4f1d30f8f133d924257bfec02c679e6c683763a91cff', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-02 20:30:39.849768 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ba0a35b43ba38e8ee02479dc2d5a102c44eb68b3d868944ed80ac02bb05af7b1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-02 20:30:39.849777 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'6b8f5cd551522a0e10ce50d93b607fe09661220d85c55fdaa104bb70f56e812f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 20:30:39.849787 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ae46ad59bbf6b8b81bc0f14056e515cfdab97c4261d70ffce7e4361176840ece', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 20:30:39.849795 | orchestrator | ok: [testbed-node-4] => (item={'id': '5d996531f736ae9b085e4a00189143a574a05cb45913c92d8dd36422199dd58d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 20:30:39.849805 | orchestrator | ok: [testbed-node-4] => (item={'id': '019e7479db1a2c48c572f608460caec709b8fcab8e3570590abaef1efc3554df', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 20:30:39.849815 | orchestrator | skipping: [testbed-node-4] => (item={'id': '918ae9915d237d6023233a0e37e001cd787444b4ef75bde5973b5ad7a37da8f2', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-06-02 20:30:39.849824 | orchestrator | skipping: [testbed-node-4] => (item={'id': '19b2e8d6438467bb765e341d0b32739965a09f79ba720cae5aa6464ec7ca1a85', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-02 20:30:39.849833 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f2d81ea2da7eabc602e3dbb520b00187dce45b0c8ff88ea314a61f434e85dd25', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-06-02 20:30:39.849861 | 
orchestrator | skipping: [testbed-node-4] => (item={'id': '9ef1efcad630e0088a4ec1e902d9da0160cb9746393e83d34ec2fb084309085d', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 20:30:39.849870 | orchestrator | skipping: [testbed-node-4] => (item={'id': '02cb920f6d3067c5d4df1b315081a2545ad61fca34390ee13641fdbafcba5065', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 20:30:39.849878 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cce54616fd9296879a8c413211c7503356c9260679feeadc8537762344ad97b6', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 20:30:39.849885 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1f4fe1bc8d34956998a5327d8f857b5996ed3ec110985fb390322cd879acc4eb', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 20:30:39.849898 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f58dceea7f38ffb8b96f69eab48f1bb301152cf948d3eff47a151a059fc6d0c9', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 20:30:39.849908 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bd8f0eb6821dd53710aabaf3009efdae9684b00ae45671eb4ee37af0aa6a030f', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 20:30:39.849924 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9e3f7dcf5f9ee50dbe3df7deb429c601f207e27e5f3a2d267baa4c8403e21ebe', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes 
(healthy)'})  2025-06-02 20:30:39.849933 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5f0cfc04f3dabdc58840d08c0013e6f1af627fb3a1a826aff6d89fdfe5e7c42c', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 20:30:39.849942 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a1480db728f423a5d12bd7a928212684e677be1729b1ceaa217676826b1b569f', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-02 20:30:39.849952 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ade52d4f0939ea4985c05d27e41016b2d98131dd57b19a250f8a9c921b1a0972', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 20:30:39.849958 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9395ec09cc1d62b65e7d7ad3e63c0d8deaac98445ccae0cfc4ee4dc37ed76fb2', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 20:30:39.849964 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7b8e07a40890d8ea1c2308458cd899f1d3aac71971e695a72cb261a174dd5b96', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-02 20:30:39.849969 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0c6b7a01112606d0664c837d6cc3c038d4b82601bc2c2d8f772d5dab6b2b8b8e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-06-02 20:30:39.849975 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'e16fc1476feb0da769f749482bb15d0b063df20320cd8d91e05df95148d8e576', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 20:30:39.849981 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dacc4999d7c2b3beba3011b7dd45da90b63c3825ff4337d55bf1af97eae6ee08', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 20:30:39.849992 | orchestrator | ok: [testbed-node-5] => (item={'id': '4e7a5cdb269c42d7c601a88716f4faafd92df0b549ad2f52cdddc2e51eb981c6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 20:30:48.240865 | orchestrator | ok: [testbed-node-5] => (item={'id': '37adb3ac7e2b184830d5334e35964fdbbe6594391d314f227816aad4c744a73a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 20:30:48.240992 | orchestrator | skipping: [testbed-node-5] => (item={'id': '67a6cac74348c687a2b6aeebd6b7b62ab3cf1e9be62b8bbb54e75efc0e657982', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-06-02 20:30:48.241019 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c3c1d23bdc7e1f5165fec5e02dc538dd349ed4064638253aec3fb32742491e37', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-02 20:30:48.241038 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f80f9b91ffea3752db5760a29ea279774960176c79df3571675ff2b28e7f4e01', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-06-02 20:30:48.241075 | 
orchestrator | skipping: [testbed-node-5] => (item={'id': '581beb5ea4ca07934d35a14b91ccb8122ee94aba9655be1740607f2e5f5640dc', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 20:30:48.241103 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ed352cd3b38298db5395a6a1f09951acc18244d4f3acfff5d926fcc7255ff19b', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 20:30:48.241114 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e2e85f045a6299d03ee463cea00847bb6c36a72fb33ebb27fd305ba4b1a3efe7', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 20:30:48.241125 | orchestrator | 2025-06-02 20:30:48.241136 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-02 20:30:48.241147 | orchestrator | Monday 02 June 2025 20:30:39 +0000 (0:00:00.479) 0:00:04.932 *********** 2025-06-02 20:30:48.241157 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:30:48.241168 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:30:48.241178 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:30:48.241192 | orchestrator | 2025-06-02 20:30:48.241207 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-02 20:30:48.241222 | orchestrator | Monday 02 June 2025 20:30:40 +0000 (0:00:00.295) 0:00:05.227 *********** 2025-06-02 20:30:48.241238 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:30:48.241254 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:30:48.241271 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:30:48.241285 | orchestrator | 2025-06-02 20:30:48.241303 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-02 20:30:48.241321 | orchestrator 
| Monday 02 June 2025 20:30:40 +0000 (0:00:00.475) 0:00:05.703 ***********
2025-06-02 20:30:48.241339 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.241349 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:30:48.241359 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:30:48.241368 | orchestrator |
2025-06-02 20:30:48.241462 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:30:48.241475 | orchestrator | Monday 02 June 2025 20:30:40 +0000 (0:00:00.288) 0:00:05.992 ***********
2025-06-02 20:30:48.241487 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.241498 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:30:48.241509 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:30:48.241520 | orchestrator |
2025-06-02 20:30:48.241532 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-06-02 20:30:48.241543 | orchestrator | Monday 02 June 2025 20:30:41 +0000 (0:00:00.304) 0:00:06.296 ***********
2025-06-02 20:30:48.241559 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-06-02 20:30:48.241577 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-06-02 20:30:48.241593 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:30:48.241609 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-06-02 20:30:48.241625 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-06-02 20:30:48.241641 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:30:48.241657 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-06-02 20:30:48.241674 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-06-02 20:30:48.241690 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:30:48.241709 | orchestrator |
2025-06-02 20:30:48.241725 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-06-02 20:30:48.241755 | orchestrator | Monday 02 June 2025 20:30:41 +0000 (0:00:00.316) 0:00:06.613 ***********
2025-06-02 20:30:48.241766 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.241776 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:30:48.241785 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:30:48.241794 | orchestrator |
2025-06-02 20:30:48.241822 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-06-02 20:30:48.241833 | orchestrator | Monday 02 June 2025 20:30:42 +0000 (0:00:00.472) 0:00:07.085 ***********
2025-06-02 20:30:48.241842 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:30:48.241852 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:30:48.241862 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:30:48.241871 | orchestrator |
2025-06-02 20:30:48.241880 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-06-02 20:30:48.241890 | orchestrator | Monday 02 June 2025 20:30:42 +0000 (0:00:00.285) 0:00:07.371 ***********
2025-06-02 20:30:48.241900 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:30:48.241910 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:30:48.241919 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:30:48.241928 | orchestrator |
2025-06-02 20:30:48.241938 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-06-02 20:30:48.241948 | orchestrator | Monday 02 June 2025 20:30:42 +0000 (0:00:00.275) 0:00:07.646 ***********
2025-06-02 20:30:48.241959 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.241976 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:30:48.241991 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:30:48.242007 | orchestrator |
2025-06-02 20:30:48.242090 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 20:30:48.242102 | orchestrator | Monday 02 June 2025 20:30:42 +0000 (0:00:00.306) 0:00:07.953 ***********
2025-06-02 20:30:48.242112 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:30:48.242121 | orchestrator |
2025-06-02 20:30:48.242131 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 20:30:48.242148 | orchestrator | Monday 02 June 2025 20:30:43 +0000 (0:00:00.683) 0:00:08.636 ***********
2025-06-02 20:30:48.242158 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:30:48.242168 | orchestrator |
2025-06-02 20:30:48.242177 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 20:30:48.242187 | orchestrator | Monday 02 June 2025 20:30:43 +0000 (0:00:00.248) 0:00:08.885 ***********
2025-06-02 20:30:48.242196 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:30:48.242205 | orchestrator |
2025-06-02 20:30:48.242215 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:30:48.242224 | orchestrator | Monday 02 June 2025 20:30:44 +0000 (0:00:00.068) 0:00:09.128 ***********
2025-06-02 20:30:48.242234 | orchestrator |
2025-06-02 20:30:48.242243 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:30:48.242253 | orchestrator | Monday 02 June 2025 20:30:44 +0000 (0:00:00.068) 0:00:09.197 ***********
2025-06-02 20:30:48.242262 | orchestrator |
2025-06-02 20:30:48.242272 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:30:48.242281 | orchestrator | Monday 02 June 2025 20:30:44 +0000 (0:00:00.069) 0:00:09.265 ***********
2025-06-02 20:30:48.242291 | orchestrator |
2025-06-02 20:30:48.242300 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 20:30:48.242310 | orchestrator | Monday 02 June 2025 20:30:44 +0000 (0:00:00.069) 0:00:09.335 ***********
2025-06-02 20:30:48.242319 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:30:48.242329 | orchestrator |
2025-06-02 20:30:48.242338 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-06-02 20:30:48.242348 | orchestrator | Monday 02 June 2025 20:30:44 +0000 (0:00:00.244) 0:00:09.579 ***********
2025-06-02 20:30:48.242357 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:30:48.242367 | orchestrator |
2025-06-02 20:30:48.242401 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:30:48.242425 | orchestrator | Monday 02 June 2025 20:30:44 +0000 (0:00:00.237) 0:00:09.816 ***********
2025-06-02 20:30:48.242435 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.242445 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:30:48.242455 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:30:48.242464 | orchestrator |
2025-06-02 20:30:48.242474 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-06-02 20:30:48.242483 | orchestrator | Monday 02 June 2025 20:30:45 +0000 (0:00:00.294) 0:00:10.111 ***********
2025-06-02 20:30:48.242493 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.242503 | orchestrator |
2025-06-02 20:30:48.242512 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-06-02 20:30:48.242522 | orchestrator | Monday 02 June 2025 20:30:45 +0000 (0:00:00.602) 0:00:10.714 ***********
2025-06-02 20:30:48.242531 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-02 20:30:48.242541 | orchestrator |
2025-06-02 20:30:48.242551 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-06-02 20:30:48.242560 | orchestrator | Monday 02 June 2025 20:30:47 +0000 (0:00:01.555) 0:00:12.269 ***********
2025-06-02 20:30:48.242570 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.242579 | orchestrator |
2025-06-02 20:30:48.242589 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-06-02 20:30:48.242599 | orchestrator | Monday 02 June 2025 20:30:47 +0000 (0:00:00.127) 0:00:12.396 ***********
2025-06-02 20:30:48.242608 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.242618 | orchestrator |
2025-06-02 20:30:48.242627 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-06-02 20:30:48.242637 | orchestrator | Monday 02 June 2025 20:30:47 +0000 (0:00:00.319) 0:00:12.716 ***********
2025-06-02 20:30:48.242646 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:30:48.242656 | orchestrator |
2025-06-02 20:30:48.242666 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-06-02 20:30:48.242675 | orchestrator | Monday 02 June 2025 20:30:47 +0000 (0:00:00.115) 0:00:12.831 ***********
2025-06-02 20:30:48.242685 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.242694 | orchestrator |
2025-06-02 20:30:48.242704 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:30:48.242713 | orchestrator | Monday 02 June 2025 20:30:47 +0000 (0:00:00.130) 0:00:12.961 ***********
2025-06-02 20:30:48.242723 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:30:48.242732 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:30:48.242742 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:30:48.242751 | orchestrator |
2025-06-02 20:30:48.242761 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-06-02
20:30:48.242779 | orchestrator | Monday 02 June 2025 20:30:48 +0000 (0:00:00.284) 0:00:13.246 *********** 2025-06-02 20:31:00.501966 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:31:00.502085 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:31:00.502092 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:31:00.502097 | orchestrator | 2025-06-02 20:31:00.502102 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-02 20:31:00.502113 | orchestrator | Monday 02 June 2025 20:30:50 +0000 (0:00:02.632) 0:00:15.878 *********** 2025-06-02 20:31:00.502117 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:31:00.502122 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:31:00.502126 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:31:00.502130 | orchestrator | 2025-06-02 20:31:00.502134 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-02 20:31:00.502138 | orchestrator | Monday 02 June 2025 20:30:51 +0000 (0:00:00.301) 0:00:16.180 *********** 2025-06-02 20:31:00.502142 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:31:00.502146 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:31:00.502150 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:31:00.502154 | orchestrator | 2025-06-02 20:31:00.502158 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-02 20:31:00.502176 | orchestrator | Monday 02 June 2025 20:30:51 +0000 (0:00:00.521) 0:00:16.701 *********** 2025-06-02 20:31:00.502180 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:31:00.502184 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:31:00.502188 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:31:00.502192 | orchestrator | 2025-06-02 20:31:00.502196 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-02 20:31:00.502210 | orchestrator | Monday 02 
June 2025 20:30:51 +0000 (0:00:00.284) 0:00:16.986 *********** 2025-06-02 20:31:00.502214 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:31:00.502217 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:31:00.502221 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:31:00.502225 | orchestrator | 2025-06-02 20:31:00.502229 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-02 20:31:00.502233 | orchestrator | Monday 02 June 2025 20:30:52 +0000 (0:00:00.474) 0:00:17.460 *********** 2025-06-02 20:31:00.502237 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:31:00.502240 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:31:00.502244 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:31:00.502248 | orchestrator | 2025-06-02 20:31:00.502252 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-02 20:31:00.502256 | orchestrator | Monday 02 June 2025 20:30:52 +0000 (0:00:00.300) 0:00:17.761 *********** 2025-06-02 20:31:00.502259 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:31:00.502263 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:31:00.502267 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:31:00.502271 | orchestrator | 2025-06-02 20:31:00.502275 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 20:31:00.502278 | orchestrator | Monday 02 June 2025 20:30:53 +0000 (0:00:00.299) 0:00:18.061 *********** 2025-06-02 20:31:00.502282 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:31:00.502286 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:31:00.502290 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:31:00.502294 | orchestrator | 2025-06-02 20:31:00.502298 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-02 20:31:00.502301 | orchestrator | Monday 02 June 2025 20:30:53 +0000 (0:00:00.469) 
0:00:18.530 *********** 2025-06-02 20:31:00.502305 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:31:00.502309 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:31:00.502313 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:31:00.502317 | orchestrator | 2025-06-02 20:31:00.502320 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-02 20:31:00.502324 | orchestrator | Monday 02 June 2025 20:30:54 +0000 (0:00:00.731) 0:00:19.262 *********** 2025-06-02 20:31:00.502328 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:31:00.502332 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:31:00.502336 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:31:00.502340 | orchestrator | 2025-06-02 20:31:00.502344 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-02 20:31:00.502348 | orchestrator | Monday 02 June 2025 20:30:54 +0000 (0:00:00.308) 0:00:19.570 *********** 2025-06-02 20:31:00.502352 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:31:00.502356 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:31:00.502360 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:31:00.502404 | orchestrator | 2025-06-02 20:31:00.502409 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-02 20:31:00.502412 | orchestrator | Monday 02 June 2025 20:30:54 +0000 (0:00:00.306) 0:00:19.877 *********** 2025-06-02 20:31:00.502416 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:31:00.502420 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:31:00.502424 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:31:00.502427 | orchestrator | 2025-06-02 20:31:00.502431 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 20:31:00.502435 | orchestrator | Monday 02 June 2025 20:30:55 +0000 (0:00:00.330) 0:00:20.208 *********** 2025-06-02 20:31:00.502443 | 
orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:31:00.502448 | orchestrator | 2025-06-02 20:31:00.502451 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 20:31:00.502455 | orchestrator | Monday 02 June 2025 20:30:55 +0000 (0:00:00.674) 0:00:20.882 *********** 2025-06-02 20:31:00.502459 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:31:00.502463 | orchestrator | 2025-06-02 20:31:00.502467 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 20:31:00.502470 | orchestrator | Monday 02 June 2025 20:30:56 +0000 (0:00:00.249) 0:00:21.132 *********** 2025-06-02 20:31:00.502474 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:31:00.502478 | orchestrator | 2025-06-02 20:31:00.502482 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 20:31:00.502486 | orchestrator | Monday 02 June 2025 20:30:57 +0000 (0:00:01.581) 0:00:22.713 *********** 2025-06-02 20:31:00.502490 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:31:00.502493 | orchestrator | 2025-06-02 20:31:00.502497 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 20:31:00.502501 | orchestrator | Monday 02 June 2025 20:30:57 +0000 (0:00:00.247) 0:00:22.961 *********** 2025-06-02 20:31:00.502515 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:31:00.502519 | orchestrator | 2025-06-02 20:31:00.502523 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:31:00.502526 | orchestrator | Monday 02 June 2025 20:30:58 +0000 (0:00:00.253) 0:00:23.215 *********** 2025-06-02 20:31:00.502530 | orchestrator | 2025-06-02 20:31:00.502534 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-06-02 20:31:00.502538 | orchestrator | Monday 02 June 2025 20:30:58 +0000 (0:00:00.072) 0:00:23.287 *********** 2025-06-02 20:31:00.502542 | orchestrator | 2025-06-02 20:31:00.502546 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:31:00.502550 | orchestrator | Monday 02 June 2025 20:30:58 +0000 (0:00:00.068) 0:00:23.356 *********** 2025-06-02 20:31:00.502553 | orchestrator | 2025-06-02 20:31:00.502557 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 20:31:00.502561 | orchestrator | Monday 02 June 2025 20:30:58 +0000 (0:00:00.068) 0:00:23.425 *********** 2025-06-02 20:31:00.502565 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:31:00.502569 | orchestrator | 2025-06-02 20:31:00.502572 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 20:31:00.502576 | orchestrator | Monday 02 June 2025 20:30:59 +0000 (0:00:01.204) 0:00:24.629 *********** 2025-06-02 20:31:00.502580 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-02 20:31:00.502585 | orchestrator |  "msg": [ 2025-06-02 20:31:00.502589 | orchestrator |  "Validator run completed.", 2025-06-02 20:31:00.502593 | orchestrator |  "You can find the report file here:", 2025-06-02 20:31:00.502597 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-02T20:30:35+00:00-report.json", 2025-06-02 20:31:00.502602 | orchestrator |  "on the following host:", 2025-06-02 20:31:00.502606 | orchestrator |  "testbed-manager" 2025-06-02 20:31:00.502610 | orchestrator |  ] 2025-06-02 20:31:00.502615 | orchestrator | } 2025-06-02 20:31:00.502621 | orchestrator | 2025-06-02 20:31:00.502627 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 
20:31:00.502634 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-02 20:31:00.502642 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 20:31:00.502648 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 20:31:00.502658 | orchestrator | 2025-06-02 20:31:00.502664 | orchestrator | 2025-06-02 20:31:00.502671 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:31:00.502677 | orchestrator | Monday 02 June 2025 20:31:00 +0000 (0:00:00.551) 0:00:25.181 *********** 2025-06-02 20:31:00.502682 | orchestrator | =============================================================================== 2025-06-02 20:31:00.502689 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.63s 2025-06-02 20:31:00.502695 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s 2025-06-02 20:31:00.502702 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.56s 2025-06-02 20:31:00.502709 | orchestrator | Write report file ------------------------------------------------------- 1.20s 2025-06-02 20:31:00.502716 | orchestrator | Create report output directory ------------------------------------------ 0.92s 2025-06-02 20:31:00.502724 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.73s 2025-06-02 20:31:00.502732 | orchestrator | Aggregate test results step one ----------------------------------------- 0.68s 2025-06-02 20:31:00.502740 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.67s 2025-06-02 20:31:00.502746 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-06-02 20:31:00.502751 | 
orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.60s 2025-06-02 20:31:00.502757 | orchestrator | Print report file information ------------------------------------------- 0.55s 2025-06-02 20:31:00.502763 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.55s 2025-06-02 20:31:00.502768 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2025-06-02 20:31:00.502774 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.52s 2025-06-02 20:31:00.502780 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.48s 2025-06-02 20:31:00.502786 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.48s 2025-06-02 20:31:00.502791 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.47s 2025-06-02 20:31:00.502796 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.47s 2025-06-02 20:31:00.502802 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2025-06-02 20:31:00.502808 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.38s 2025-06-02 20:31:00.756905 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-02 20:31:00.765548 | orchestrator | + set -e 2025-06-02 20:31:00.765636 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 20:31:00.765650 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 20:31:00.765661 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 20:31:00.765671 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 20:31:00.765680 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 20:31:00.765690 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 20:31:00.765701 | orchestrator | ++ CONFIGURATION_VERSION=main 
2025-06-02 20:31:00.765711 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 20:31:00.765721 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 20:31:00.765730 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 20:31:00.765740 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 20:31:00.765749 | orchestrator | ++ export ARA=false
2025-06-02 20:31:00.765759 | orchestrator | ++ ARA=false
2025-06-02 20:31:00.765769 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 20:31:00.765778 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 20:31:00.765788 | orchestrator | ++ export TEMPEST=false
2025-06-02 20:31:00.765798 | orchestrator | ++ TEMPEST=false
2025-06-02 20:31:00.766347 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 20:31:00.766479 | orchestrator | ++ IS_ZUUL=true
2025-06-02 20:31:00.766493 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-06-02 20:31:00.766503 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-06-02 20:31:00.766513 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 20:31:00.766522 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 20:31:00.766556 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 20:31:00.766566 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 20:31:00.766576 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 20:31:00.766585 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 20:31:00.766595 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 20:31:00.766604 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 20:31:00.766614 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-02 20:31:00.766624 | orchestrator | + source /etc/os-release
2025-06-02 20:31:00.766634 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-06-02 20:31:00.766643 | orchestrator | ++ NAME=Ubuntu
2025-06-02 20:31:00.766653 | orchestrator | ++ VERSION_ID=24.04
2025-06-02 20:31:00.766662 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-06-02 20:31:00.766672 | orchestrator | ++ VERSION_CODENAME=noble
2025-06-02 20:31:00.766681 | orchestrator | ++ ID=ubuntu
2025-06-02 20:31:00.766692 | orchestrator | ++ ID_LIKE=debian
2025-06-02 20:31:00.766708 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-06-02 20:31:00.766724 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-06-02 20:31:00.766741 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-06-02 20:31:00.766759 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-06-02 20:31:00.766778 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-06-02 20:31:00.766812 | orchestrator | ++ LOGO=ubuntu-logo
2025-06-02 20:31:00.766839 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-06-02 20:31:00.766865 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-06-02 20:31:00.766885 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-06-02 20:31:00.797292 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-06-02 20:31:22.916085 | orchestrator |
2025-06-02 20:31:22.916175 | orchestrator | # Status of Elasticsearch
2025-06-02 20:31:22.916185 | orchestrator |
2025-06-02 20:31:22.916192 | orchestrator | + pushd /opt/configuration/contrib
2025-06-02 20:31:22.916200 | orchestrator | + echo
2025-06-02 20:31:22.916207 | orchestrator | + echo '# Status of Elasticsearch'
2025-06-02 20:31:22.916213 | orchestrator | + echo
2025-06-02 20:31:22.916220 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-06-02 20:31:23.084577 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-06-02 20:31:23.084663 | orchestrator |
2025-06-02 20:31:23.084674 | orchestrator | # Status of MariaDB
2025-06-02 20:31:23.084683 | orchestrator |
2025-06-02 20:31:23.084690 | orchestrator | + echo
2025-06-02 20:31:23.084698 | orchestrator | + echo '# Status of MariaDB'
2025-06-02 20:31:23.084706 | orchestrator | + echo
2025-06-02 20:31:23.084713 | orchestrator | + MARIADB_USER=root_shard_0
2025-06-02 20:31:23.084722 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-06-02 20:31:23.140455 | orchestrator | Reading package lists...
2025-06-02 20:31:23.429206 | orchestrator | Building dependency tree...
2025-06-02 20:31:23.429328 | orchestrator | Reading state information...
2025-06-02 20:31:23.814758 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-06-02 20:31:23.814903 | orchestrator | bc set to manually installed.
2025-06-02 20:31:23.814919 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2025-06-02 20:31:24.440315 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-06-02 20:31:24.440856 | orchestrator |
2025-06-02 20:31:24.440877 | orchestrator | # Status of Prometheus
2025-06-02 20:31:24.440883 | orchestrator |
2025-06-02 20:31:24.440887 | orchestrator | + echo
2025-06-02 20:31:24.440891 | orchestrator | + echo '# Status of Prometheus'
2025-06-02 20:31:24.440895 | orchestrator | + echo
2025-06-02 20:31:24.440900 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-06-02 20:31:24.492019 | orchestrator | Unauthorized
2025-06-02 20:31:24.494272 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-06-02 20:31:24.556124 | orchestrator | Unauthorized
2025-06-02 20:31:24.558131 | orchestrator |
2025-06-02 20:31:24.558195 | orchestrator | # Status of RabbitMQ
2025-06-02 20:31:24.558210 | orchestrator |
2025-06-02 20:31:24.558222 | orchestrator | + echo
2025-06-02 20:31:24.558233 | orchestrator | + echo '# Status of RabbitMQ'
2025-06-02 20:31:24.558275 | orchestrator | + echo
2025-06-02 20:31:24.558288 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-06-02 20:31:24.970530 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-06-02 20:31:24.978557 | orchestrator |
2025-06-02 20:31:24.978635 | orchestrator | # Status of Redis
2025-06-02 20:31:24.978643 | orchestrator |
2025-06-02 20:31:24.978648 | orchestrator | + echo
2025-06-02 20:31:24.978654 | orchestrator | + echo '# Status of Redis'
2025-06-02 20:31:24.978661 | orchestrator | + echo
2025-06-02 20:31:24.978668 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-06-02 20:31:24.983047 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001534s;;;0.000000;10.000000
2025-06-02 20:31:24.983111 | orchestrator | + popd
2025-06-02 20:31:24.983384 | orchestrator |
2025-06-02 20:31:24.983401 | orchestrator | # Create backup of MariaDB database
2025-06-02 20:31:24.983409 | orchestrator |
2025-06-02 20:31:24.983414 | orchestrator | + echo
2025-06-02 20:31:24.983420 | orchestrator | + echo '# Create backup of MariaDB database'
2025-06-02 20:31:24.983426 | orchestrator | + echo
2025-06-02 20:31:24.983432 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-06-02 20:31:26.679710 | orchestrator | 2025-06-02 20:31:26 | INFO  | Task 1f222996-59ee-493e-add0-c27267343880 (mariadb_backup) was prepared for execution.
2025-06-02 20:31:26.680787 | orchestrator | 2025-06-02 20:31:26 | INFO  | It takes a moment until task 1f222996-59ee-493e-add0-c27267343880 (mariadb_backup) has been started and output is visible here.
2025-06-02 20:31:30.526100 | orchestrator |
2025-06-02 20:31:30.526514 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:31:30.526990 | orchestrator |
2025-06-02 20:31:30.530607 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:31:30.531504 | orchestrator | Monday 02 June 2025 20:31:30 +0000 (0:00:00.182) 0:00:00.182 ***********
2025-06-02 20:31:30.712323 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:31:30.826767 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:31:30.827782 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:31:30.834948 | orchestrator |
2025-06-02 20:31:30.835039 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:31:30.835056 | orchestrator | Monday 02 June 2025 20:31:30 +0000 (0:00:00.304) 0:00:00.487 ***********
2025-06-02 20:31:31.408122 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-06-02 20:31:31.409236 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-06-02 20:31:31.410137 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-06-02 20:31:31.412036 | orchestrator |
2025-06-02 20:31:31.412778 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-06-02 20:31:31.413575 | orchestrator |
2025-06-02 20:31:31.414560 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-06-02 20:31:31.415487 | orchestrator | Monday 02 June 2025 20:31:31 +0000 (0:00:00.581) 0:00:01.069 ***********
2025-06-02 20:31:31.819793 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:31:31.825621 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:31:31.825753 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:31:31.827647 | orchestrator |
2025-06-02 20:31:31.828780 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-02 20:31:31.829458 | orchestrator | Monday 02 June 2025 20:31:31 +0000 (0:00:00.409) 0:00:01.478 ***********
2025-06-02 20:31:32.328887 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:31:32.329412 | orchestrator |
2025-06-02 20:31:32.332291 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-06-02 20:31:32.333257 | orchestrator | Monday 02 June 2025 20:31:32 +0000 (0:00:00.508) 0:00:01.987 ***********
2025-06-02 20:31:35.489681 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:31:35.492493 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:31:35.493937 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:31:35.495463 | orchestrator |
2025-06-02 20:31:35.495809 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-06-02 20:31:35.496119 | orchestrator | Monday 02 June 2025 20:31:35 +0000 (0:00:03.154) 0:00:05.141 ***********
2025-06-02 20:33:14.092464 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-06-02 20:33:14.092547 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-06-02 20:33:14.092555 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-02 20:33:14.092561 | orchestrator | mariadb_bootstrap_restart
2025-06-02 20:33:14.178087 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:33:14.178719 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:33:14.179258 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:33:14.179993 | orchestrator |
2025-06-02 20:33:14.180802 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-06-02 20:33:14.183895 | orchestrator | skipping: no hosts matched
2025-06-02 20:33:14.187689 | orchestrator |
2025-06-02 20:33:14.187732 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-02 20:33:14.187738 | orchestrator | skipping: no hosts matched
2025-06-02 20:33:14.187744 | orchestrator |
2025-06-02 20:33:14.188603 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-06-02 20:33:14.188906 | orchestrator | skipping: no hosts matched
2025-06-02 20:33:14.189956 | orchestrator |
2025-06-02 20:33:14.190589 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-06-02 20:33:14.191037 | orchestrator |
2025-06-02 20:33:14.192610 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-06-02 20:33:14.192652 | orchestrator | Monday 02 June 2025 20:33:14 +0000 (0:01:38.697) 0:01:43.839 ***********
2025-06-02 20:33:14.365757 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:33:14.477207 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:33:14.478952 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:33:14.480674 | orchestrator |
2025-06-02 20:33:14.484962 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-06-02 20:33:14.486137 | orchestrator | Monday 02 June 2025 20:33:14 +0000 (0:00:00.298) 0:01:44.138 ***********
2025-06-02 20:33:14.827592 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:33:14.872346 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:33:14.874433 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:33:14.876163 | orchestrator |
2025-06-02 20:33:14.877701 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:33:14.878894 | orchestrator | 2025-06-02 20:33:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 20:33:14.879709 | orchestrator | 2025-06-02 20:33:14 | INFO  | Please wait and do not abort execution.
2025-06-02 20:33:14.881079 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 20:33:14.881933 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:33:14.882942 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:33:14.884110 | orchestrator | 2025-06-02 20:33:14.885172 | orchestrator | 2025-06-02 20:33:14.888496 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:33:14.888611 | orchestrator | Monday 02 June 2025 20:33:14 +0000 (0:00:00.394) 0:01:44.532 *********** 2025-06-02 20:33:14.889210 | orchestrator | =============================================================================== 2025-06-02 20:33:14.890086 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 98.70s 2025-06-02 20:33:14.890661 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.15s 2025-06-02 20:33:14.891135 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-06-02 20:33:14.891675 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.51s 2025-06-02 20:33:14.892270 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2025-06-02 20:33:14.892787 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.39s 2025-06-02 20:33:14.893162 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-06-02 20:33:14.893637 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-06-02 20:33:15.395011 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-02 20:33:15.402683 | orchestrator | + set -e 
2025-06-02 20:33:15.402778 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 20:33:15.402794 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 20:33:15.402806 | orchestrator | ++ INTERACTIVE=false
2025-06-02 20:33:15.402819 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 20:33:15.402830 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 20:33:15.402841 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-02 20:33:15.402865 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-02 20:33:15.410090 | orchestrator |
2025-06-02 20:33:15.410173 | orchestrator | # OpenStack endpoints
2025-06-02 20:33:15.410185 | orchestrator |
2025-06-02 20:33:15.410196 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 20:33:15.410206 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 20:33:15.410216 | orchestrator | + export OS_CLOUD=admin
2025-06-02 20:33:15.410226 | orchestrator | + OS_CLOUD=admin
2025-06-02 20:33:15.410236 | orchestrator | + echo
2025-06-02 20:33:15.410245 | orchestrator | + echo '# OpenStack endpoints'
2025-06-02 20:33:15.410255 | orchestrator | + echo
2025-06-02 20:33:15.410265 | orchestrator | + openstack endpoint list
2025-06-02 20:33:18.968443 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-02 20:33:18.968554 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-06-02 20:33:18.968569 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-02 20:33:18.968581 | orchestrator | | 04fa632230644334af894f5e5904c22c | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2025-06-02 20:33:18.968591 | orchestrator | | 06b621fe894347fca11221431c7022d7 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-06-02 20:33:18.968602 | orchestrator | | 0ee554576c0540fab4841647270dd8e5 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-06-02 20:33:18.968613 | orchestrator | | 2215f2da5a5941da959eafd6d38b4f5c | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-06-02 20:33:18.968623 | orchestrator | | 3d640015c4c84d97b356a5691179f717 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-06-02 20:33:18.968634 | orchestrator | | 4b0df9913ebd418a95c741fab54b3587 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-06-02 20:33:18.968644 | orchestrator | | 5e54534cb87f42e194d1beb88635594c | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2025-06-02 20:33:18.968655 | orchestrator | | 6cd4ab6b8b014699987c0d10a0cdfb3d | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-06-02 20:33:18.968687 | orchestrator | | 88836405005442bcbc894e308098b16e | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2025-06-02 20:33:18.968698 | orchestrator | | 9149aa79439c48c9be5e9decbdc01c52 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2025-06-02 20:33:18.968708 | orchestrator | | 93f27dd75f1b433889d65669433432fb | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-06-02 20:33:18.968719 | orchestrator | | 94cc4aadf528445ea8c7ea2e7b4170af | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-06-02 20:33:18.968730 | orchestrator | | ba4e04101e184fedbc0d86d2a7b0b654 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-06-02 20:33:18.968740 | orchestrator | | ca544fd3e5b84916acb9b0382c332cc4 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2025-06-02 20:33:18.968751 | orchestrator | | cd33d5e666934351b3bd167fff268ff1 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-06-02 20:33:18.968761 | orchestrator | | d6c41c7105974c5caaa634a809869c57 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2025-06-02 20:33:18.968772 | orchestrator | | d93fe647c5ed4f8cbda42c8fa0af9b65 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-06-02 20:33:18.968782 | orchestrator | | dbe6947446d54d57a25bb42a4ff09765 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-06-02 20:33:18.968793 | orchestrator | | dc1c2da442c9413c99168469ddff9a46 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-06-02 20:33:18.968803 | orchestrator | | e66f38ca933045bcac78f84ac1e0fcc4 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2025-06-02 20:33:18.968832 | orchestrator | | ee0cdb4c8cf54e91a8382b16ac2e1f9f | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2025-06-02 20:33:18.968844 | orchestrator | | f5f19e2e33bb428aa0ab215f24262a5d | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-06-02 20:33:18.968854 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-02 20:33:19.234444 | orchestrator |
2025-06-02 20:33:19.234537 | orchestrator | # Cinder
2025-06-02 20:33:19.234549 | orchestrator |
2025-06-02 20:33:19.234559 | orchestrator | + echo
2025-06-02 20:33:19.234568 | orchestrator | + echo '# Cinder'
2025-06-02 20:33:19.234577 | orchestrator | + echo
2025-06-02 20:33:19.234586 | orchestrator | + openstack volume service list
2025-06-02 20:33:22.411870 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-02 20:33:22.412020 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-06-02 20:33:22.412047 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-02 20:33:22.412068 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-02T20:33:20.000000 |
2025-06-02 20:33:22.412082 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-02T20:33:12.000000 |
2025-06-02 20:33:22.412119 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-02T20:33:15.000000 |
2025-06-02 20:33:22.412131 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-02T20:33:13.000000 |
2025-06-02 20:33:22.412142 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-02T20:33:14.000000 |
2025-06-02 20:33:22.412152 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-02T20:33:13.000000 |
2025-06-02 20:33:22.412163 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-02T20:33:13.000000 |
2025-06-02 20:33:22.412174 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-02T20:33:14.000000 |
2025-06-02 20:33:22.412185 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-02T20:33:14.000000 |
2025-06-02 20:33:22.412196 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-02 20:33:22.685028 | orchestrator |
2025-06-02 20:33:22.685114 | orchestrator | # Neutron
2025-06-02 20:33:22.685124 | orchestrator |
2025-06-02 20:33:22.685131 | orchestrator | + echo
2025-06-02 20:33:22.685138 | orchestrator | + echo '# Neutron'
2025-06-02 20:33:22.685145 | orchestrator | + echo
2025-06-02 20:33:22.685151 | orchestrator | + openstack network agent list
2025-06-02 20:33:25.558909 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-02 20:33:25.559036 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2025-06-02 20:33:25.559059 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-02 20:33:25.559078 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2025-06-02 20:33:25.559097 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-06-02 20:33:25.559150 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-06-02 20:33:25.559170 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-06-02 20:33:25.559187 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-06-02 20:33:25.559206 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2025-06-02 20:33:25.559223 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-02 20:33:25.559238 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-02 20:33:25.559254 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-02 20:33:25.559313 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-02 20:33:25.803444 | orchestrator | + openstack network service provider list
2025-06-02 20:33:28.358262 | orchestrator | +---------------+------+---------+
2025-06-02 20:33:28.358429 | orchestrator | | Service Type | Name | Default |
2025-06-02 20:33:28.358466 | orchestrator | +---------------+------+---------+
2025-06-02 20:33:28.358497 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-06-02 20:33:28.358506 | orchestrator | +---------------+------+---------+
2025-06-02 20:33:28.614588 | orchestrator |
2025-06-02 20:33:28.614672 | orchestrator | # Nova
2025-06-02 20:33:28.614682 | orchestrator |
2025-06-02 20:33:28.614690 | orchestrator | + echo
2025-06-02 20:33:28.614698 | orchestrator | + echo '# Nova'
2025-06-02 20:33:28.614705 | orchestrator | + echo
2025-06-02 20:33:28.614713 | orchestrator | + openstack compute service list
2025-06-02 20:33:31.442892 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-02 20:33:31.443839 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-06-02 20:33:31.443884 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-02 20:33:31.443897 | orchestrator | | d0ffe696-3506-4247-bc57-79d254875b9d | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-02T20:33:29.000000 |
2025-06-02 20:33:31.443908 | orchestrator | | e0f84cd2-8363-4bcf-a082-e4db8a4ff8fa | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-02T20:33:29.000000 |
2025-06-02 20:33:31.443919 | orchestrator | | 319f458e-f630-4025-9fa4-1dcccbf1785d | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-02T20:33:21.000000 |
2025-06-02 20:33:31.443930 | orchestrator | | a6513c68-5300-4bfe-9454-e6fc9a5de5a9 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-02T20:33:25.000000 |
2025-06-02 20:33:31.443940 | orchestrator | | 42a78bea-9039-476d-9c79-8f72c3222bc5 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-02T20:33:28.000000 |
2025-06-02 20:33:31.443951 | orchestrator | | 4f4177a6-0869-4b94-b8f7-f19c205888f6 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-02T20:33:22.000000 |
2025-06-02 20:33:31.443990 | orchestrator | | e2a41ae1-6da6-422a-a5a9-548f9f28c67a | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-02T20:33:22.000000 |
2025-06-02 20:33:31.444002 | orchestrator | | 41b7c7d2-cb2f-4eb0-a76b-fc85f592882f | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-02T20:33:22.000000 |
2025-06-02 20:33:31.444012 | orchestrator | | 9fe0a6d6-b2d6-4cc7-b8ad-3f9608bff062 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-02T20:33:23.000000 |
2025-06-02 20:33:31.444023 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-02 20:33:31.703458 | orchestrator | + openstack hypervisor list
2025-06-02 20:33:36.051818 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-02 20:33:36.051929 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-06-02 20:33:36.051944 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-02 20:33:36.051955 | orchestrator | | 9a4eafd3-2ff5-4ebc-a64e-26da9215394d | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-06-02 20:33:36.051966 | orchestrator | | 46b5688b-6731-46af-8ad4-3065a2b941da | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-06-02 20:33:36.051977 | orchestrator | | fc5d0462-1e0f-42b5-962d-209c12be1db2 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-06-02 20:33:36.051988 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-02 20:33:36.301951 | orchestrator |
2025-06-02 20:33:36.302112 | orchestrator | # Run OpenStack test play
2025-06-02 20:33:36.302130 | orchestrator |
2025-06-02 20:33:36.302142 | orchestrator | + echo
2025-06-02 20:33:36.302154 | orchestrator | + echo '# Run OpenStack test play'
2025-06-02 20:33:36.302166 | orchestrator | + echo
2025-06-02 20:33:36.302177 | orchestrator | + osism apply --environment openstack test
2025-06-02 20:33:37.945808 | orchestrator | 2025-06-02 20:33:37 | INFO  | Trying to run play test in environment openstack
2025-06-02 20:33:37.950441 | orchestrator | Registering Redlock._acquired_script
2025-06-02 20:33:37.950523 | orchestrator | Registering Redlock._extend_script
2025-06-02 20:33:37.950557 | orchestrator | Registering Redlock._release_script
2025-06-02 20:33:38.009694 | orchestrator | 2025-06-02 20:33:38 | INFO  | Task 1446419d-aee6-44ce-a054-4a815bf9e0fd (test) was prepared for execution.
2025-06-02 20:33:38.009810 | orchestrator | 2025-06-02 20:33:38 | INFO  | It takes a moment until task 1446419d-aee6-44ce-a054-4a815bf9e0fd (test) has been started and output is visible here.
2025-06-02 20:33:41.922194 | orchestrator |
2025-06-02 20:33:41.922984 | orchestrator | PLAY [Create test project] *****************************************************
2025-06-02 20:33:41.926003 | orchestrator |
2025-06-02 20:33:41.926761 | orchestrator | TASK [Create test domain] ******************************************************
2025-06-02 20:33:41.927630 | orchestrator | Monday 02 June 2025 20:33:41 +0000 (0:00:00.078) 0:00:00.078 ***********
2025-06-02 20:33:45.451611 | orchestrator | changed: [localhost]
2025-06-02 20:33:45.454383 | orchestrator |
2025-06-02 20:33:45.454467 | orchestrator | TASK [Create test-admin user] **************************************************
2025-06-02 20:33:45.454480 | orchestrator | Monday 02 June 2025 20:33:45 +0000 (0:00:03.530) 0:00:03.609 ***********
2025-06-02 20:33:49.553613 | orchestrator | changed: [localhost]
2025-06-02 20:33:49.554394 | orchestrator |
2025-06-02 20:33:49.555279 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-06-02 20:33:49.555841 | orchestrator | Monday 02 June 2025 20:33:49 +0000 (0:00:04.099) 0:00:07.708 ***********
2025-06-02 20:33:55.537790 | orchestrator | changed: [localhost]
2025-06-02 20:33:55.539390 | orchestrator |
2025-06-02 20:33:55.540694 | orchestrator | TASK [Create test project] *****************************************************
2025-06-02 20:33:55.541346 | orchestrator | Monday 02 June 2025 20:33:55 +0000 (0:00:05.985) 0:00:13.694 ***********
2025-06-02 20:33:59.466707 | orchestrator | changed: [localhost]
2025-06-02 20:33:59.466818 | orchestrator |
2025-06-02 20:33:59.467664 | orchestrator | TASK [Create test user] ********************************************************
2025-06-02 20:33:59.468076 | orchestrator | Monday 02 June 2025 20:33:59 +0000 (0:00:03.927) 0:00:17.622 ***********
2025-06-02 20:34:03.485103 | orchestrator | changed: [localhost]
2025-06-02 20:34:03.485177 | orchestrator |
2025-06-02 20:34:03.485184 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-06-02 20:34:03.485190 | orchestrator | Monday 02 June 2025 20:34:03 +0000 (0:00:04.018) 0:00:21.640 ***********
2025-06-02 20:34:15.367325 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-06-02 20:34:15.367425 | orchestrator | changed: [localhost] => (item=member)
2025-06-02 20:34:15.367517 | orchestrator | changed: [localhost] => (item=creator)
2025-06-02 20:34:15.367538 | orchestrator |
2025-06-02 20:34:15.367546 | orchestrator | TASK [Create test server group] ************************************************
2025-06-02 20:34:15.367555 | orchestrator | Monday 02 June 2025 20:34:15 +0000 (0:00:11.878) 0:00:33.518 ***********
2025-06-02 20:34:19.593182 | orchestrator | changed: [localhost]
2025-06-02 20:34:19.593963 | orchestrator |
2025-06-02 20:34:19.595824 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-06-02 20:34:19.596586 | orchestrator | Monday 02 June 2025 20:34:19 +0000 (0:00:04.226) 0:00:37.745 ***********
2025-06-02 20:34:24.484531 | orchestrator | changed: [localhost]
2025-06-02 20:34:24.484754 | orchestrator |
2025-06-02 20:34:24.485873 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-06-02 20:34:24.487084 | orchestrator | Monday 02 June 2025 20:34:24 +0000 (0:00:04.896) 0:00:42.641 ***********
2025-06-02 20:34:28.579620 | orchestrator | changed: [localhost]
2025-06-02 20:34:28.581128 | orchestrator |
2025-06-02 20:34:28.581162 | orchestrator | TASK [Create icmp security group] **********************************************
2025-06-02 20:34:28.582091 | orchestrator | Monday 02 June 2025 20:34:28 +0000 (0:00:04.093) 0:00:46.735 ***********
2025-06-02 20:34:32.563714 | orchestrator | changed: [localhost]
2025-06-02 20:34:32.563870 | orchestrator |
2025-06-02 20:34:32.565596 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-06-02 20:34:32.565781 | orchestrator | Monday 02 June 2025 20:34:32 +0000 (0:00:03.984) 0:00:50.720 ***********
2025-06-02 20:34:36.690841 | orchestrator | changed: [localhost]
2025-06-02 20:34:36.690942 | orchestrator |
2025-06-02 20:34:36.690958 | orchestrator | TASK [Create test keypair] *****************************************************
2025-06-02 20:34:36.690976 | orchestrator | Monday 02 June 2025 20:34:36 +0000 (0:00:04.125) 0:00:54.845 ***********
2025-06-02 20:34:40.585492 | orchestrator | changed: [localhost]
2025-06-02 20:34:40.585601 | orchestrator |
2025-06-02 20:34:40.586693 | orchestrator | TASK [Create test network topology] ********************************************
2025-06-02 20:34:40.586817 | orchestrator | Monday 02 June 2025 20:34:40 +0000 (0:00:03.896) 0:00:58.742 ***********
2025-06-02 20:34:56.923060 | orchestrator | changed: [localhost]
2025-06-02 20:34:56.923274 | orchestrator |
2025-06-02 20:34:56.923692 | orchestrator | TASK [Create test instances] ***************************************************
2025-06-02 20:34:56.923724 | orchestrator | Monday 02 June 2025 20:34:56 +0000 (0:00:16.333) 0:01:15.076 ***********
2025-06-02 20:37:11.785697 | orchestrator | changed: [localhost] => (item=test)
2025-06-02 20:37:11.785824 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-02 20:37:11.785846 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-02 20:37:11.786516 | orchestrator |
2025-06-02 20:37:11.786809 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-02 20:37:41.790649 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-02 20:37:41.790762 | orchestrator |
2025-06-02 20:37:41.790778 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-02 20:38:11.788947 | orchestrator |
2025-06-02 20:38:11.789103 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-02 20:38:12.709290 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-02 20:38:12.710267 | orchestrator |
2025-06-02 20:38:12.710901 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-06-02 20:38:12.713142 | orchestrator | Monday 02 June 2025 20:38:12 +0000 (0:03:15.792) 0:04:30.868 ***********
2025-06-02 20:38:36.062586 | orchestrator | changed: [localhost] => (item=test)
2025-06-02 20:38:36.062701 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-02 20:38:36.063373 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-02 20:38:36.065153 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-02 20:38:36.065713 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-02 20:38:36.066417 | orchestrator |
2025-06-02 20:38:36.066747 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-06-02 20:38:36.067491 | orchestrator | Monday 02 June 2025 20:38:36 +0000 (0:00:23.349) 0:04:54.218 ***********
2025-06-02 20:39:08.023862 | orchestrator | changed: [localhost] => (item=test)
2025-06-02 20:39:08.023956 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-02 20:39:08.024222 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-02 20:39:08.024617 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-02 20:39:08.026314 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-02 20:39:08.026429 | orchestrator |
2025-06-02 20:39:08.026496 | orchestrator | TASK [Create test volume] ******************************************************
2025-06-02 20:39:08.026827 | orchestrator | Monday 02 June 2025 20:39:08 +0000 (0:00:31.960) 0:05:26.178 ***********
2025-06-02 20:39:14.847442 | orchestrator | changed: [localhost]
2025-06-02 20:39:14.848190 | orchestrator |
2025-06-02 20:39:14.849477 | orchestrator | TASK [Attach test volume] ******************************************************
2025-06-02 20:39:14.850646 | orchestrator | Monday 02 June 2025 20:39:14 +0000 (0:00:06.823) 0:05:33.002 ***********
2025-06-02 20:39:28.305624 | orchestrator | changed: [localhost]
2025-06-02 20:39:28.305692 | orchestrator |
2025-06-02 20:39:28.305699 | orchestrator | TASK [Create floating ip address] **********************************************
2025-06-02 20:39:28.305705 | orchestrator | Monday 02 June 2025 20:39:28 +0000 (0:00:13.454) 0:05:46.457 ***********
2025-06-02 20:39:33.419740 | orchestrator | ok: [localhost]
2025-06-02 20:39:33.419817 | orchestrator |
2025-06-02 20:39:33.421607 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-06-02 20:39:33.422235 | orchestrator | Monday 02 June 2025 20:39:33 +0000 (0:00:05.119) 0:05:51.576 ***********
2025-06-02 20:39:33.461689 | orchestrator | ok: [localhost] => {
2025-06-02 20:39:33.462552 | orchestrator |  "msg": "192.168.112.192"
2025-06-02 20:39:33.463494 | orchestrator | }
2025-06-02 20:39:33.464076 | orchestrator |
2025-06-02 20:39:33.465332 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:39:33.465754 | orchestrator | 2025-06-02 20:39:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 20:39:33.466041 | orchestrator | 2025-06-02 20:39:33 | INFO  | Please wait and do not abort execution.
2025-06-02 20:39:33.466973 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:39:33.467844 | orchestrator |
2025-06-02 20:39:33.468782 | orchestrator |
2025-06-02 20:39:33.469589 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:39:33.470282 | orchestrator | Monday 02 June 2025 20:39:33 +0000 (0:00:00.041) 0:05:51.618 ***********
2025-06-02 20:39:33.470846 | orchestrator | ===============================================================================
2025-06-02 20:39:33.471432 | orchestrator | Create test instances ------------------------------------------------- 195.79s
2025-06-02 20:39:33.472117 | orchestrator | Add tag to instances --------------------------------------------------- 31.96s
2025-06-02 20:39:33.472865 | orchestrator | Add metadata to instances ---------------------------------------------- 23.35s
2025-06-02 20:39:33.473958 | orchestrator | Create test network topology ------------------------------------------- 16.33s
2025-06-02 20:39:33.474567 | orchestrator | Attach test volume ----------------------------------------------------- 13.45s
2025-06-02 20:39:33.475521 | orchestrator | Add member roles to user test ------------------------------------------ 11.88s
2025-06-02 20:39:33.476324 | orchestrator | Create test volume ------------------------------------------------------ 6.82s
2025-06-02 20:39:33.477011 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.99s
2025-06-02 20:39:33.477977 | orchestrator | Create floating ip address ---------------------------------------------- 5.12s
2025-06-02 20:39:33.478700 | orchestrator | Create ssh security group ----------------------------------------------- 4.90s
2025-06-02 20:39:33.479625 | orchestrator | Create test server group ------------------------------------------------ 4.23s
2025-06-02 20:39:33.480166 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.13s
2025-06-02 20:39:33.480748 | orchestrator | Create test-admin user -------------------------------------------------- 4.10s
2025-06-02 20:39:33.481443 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.09s
2025-06-02 20:39:33.481956 | orchestrator | Create test user -------------------------------------------------------- 4.02s
2025-06-02 20:39:33.482682 | orchestrator | Create icmp security group ---------------------------------------------- 3.98s
2025-06-02 20:39:33.483192 | orchestrator | Create test project ----------------------------------------------------- 3.93s
2025-06-02 20:39:33.483731 | orchestrator | Create test keypair ----------------------------------------------------- 3.90s
2025-06-02 20:39:33.484494 | orchestrator | Create test domain ------------------------------------------------------ 3.53s
2025-06-02 20:39:33.485556 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-06-02 20:39:33.935360 | orchestrator | + server_list
2025-06-02 20:39:33.935426 | orchestrator | + openstack --os-cloud test server list
2025-06-02 20:39:37.770744 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-02 20:39:37.770881 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-06-02 20:39:37.770898 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-02 20:39:37.770936 | orchestrator | | 23c9e52c-28dc-4f12-af68-2c9727a87be6 | test-4 | ACTIVE | auto_allocated_network=10.42.0.24, 192.168.112.185 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:39:37.770948 | orchestrator | | 55d281de-70e7-4581-8dc5-4264fc198225 | test-3 | ACTIVE | auto_allocated_network=10.42.0.46, 192.168.112.147 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:39:37.770959 | orchestrator | | c9e2f9fc-a1e4-4c90-a752-61e4469ea113 | test-2 | ACTIVE | auto_allocated_network=10.42.0.61, 192.168.112.133 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:39:37.770970 | orchestrator | | 4a95e195-d28c-466d-b590-4162bfc7f942 | test-1 | ACTIVE | auto_allocated_network=10.42.0.38, 192.168.112.169 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:39:37.770980 | orchestrator | | f6bb06f0-2809-4245-af9b-f980eb79d018 | test | ACTIVE | auto_allocated_network=10.42.0.42, 192.168.112.192 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:39:37.770991 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-02 20:39:38.015473 | orchestrator | + openstack --os-cloud test server show test
2025-06-02 20:39:41.283563 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-02 20:39:41.283707 | orchestrator | | Field | Value |
2025-06-02 20:39:41.283725 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-02 20:39:41.283737 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-02 20:39:41.283748 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-02 20:39:41.283760 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-02 20:39:41.283771 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-06-02 20:39:41.283781 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-02 20:39:41.283811 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-02 20:39:41.283822 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-02 20:39:41.283834 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-02 20:39:41.283861 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-02 20:39:41.283873 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-02 20:39:41.283884 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-02 20:39:41.283902 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-02 20:39:41.283914 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-02 20:39:41.283925 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-02 20:39:41.283935 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-02 20:39:41.283946 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T20:35:28.000000 |
2025-06-02 20:39:41.283964 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-02 20:39:41.283975 | orchestrator | | accessIPv4 | |
2025-06-02 20:39:41.283985 | orchestrator | | accessIPv6 | |
2025-06-02 20:39:41.283996 | orchestrator | | addresses | auto_allocated_network=10.42.0.42, 192.168.112.192 |
2025-06-02 20:39:41.284014 | orchestrator | | config_drive | |
2025-06-02 20:39:41.284029 | orchestrator | | created | 2025-06-02T20:35:05Z |
2025-06-02 20:39:41.284041 | orchestrator | | description | None |
2025-06-02 20:39:41.284052 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-02 20:39:41.284063 | orchestrator | | hostId | 1adc1a4726029b1366413208c78424465aa5528f38d492e88e69a697 |
2025-06-02 20:39:41.284076 | orchestrator | | host_status | None |
2025-06-02 20:39:41.284094 | orchestrator | | id | f6bb06f0-2809-4245-af9b-f980eb79d018 |
2025-06-02 20:39:41.284126 | orchestrator | | image | Cirros 0.6.2 (402a0b74-479b-4d34-8466-cfce983dfde8) |
2025-06-02 20:39:41.284148 | orchestrator | | key_name | test |
2025-06-02 20:39:41.284169 | orchestrator | | locked | False |
2025-06-02 20:39:41.284189 | orchestrator | | locked_reason | None |
2025-06-02 20:39:41.284211 | orchestrator | | name | test |
2025-06-02 20:39:41.284274 | orchestrator | | pinned_availability_zone | None |
2025-06-02 20:39:41.284297 | orchestrator | | progress | 0 |
2025-06-02 20:39:41.284315 | orchestrator | | project_id | ad1ed8a7e43e4a23aec98b6138f243e4 |
2025-06-02 20:39:41.284326 | orchestrator | | properties | hostname='test' |
2025-06-02 20:39:41.284337 | orchestrator | | security_groups | name='icmp' |
2025-06-02 20:39:41.284348 | orchestrator | | | name='ssh' |
2025-06-02 20:39:41.284366 | orchestrator | | server_groups | None |
2025-06-02 20:39:41.284378 | orchestrator | | status | ACTIVE |
2025-06-02 20:39:41.284389 | orchestrator | | tags | test |
2025-06-02 20:39:41.284400 | orchestrator | | trusted_image_certificates | None |
2025-06-02 20:39:41.284410 | orchestrator | | updated | 2025-06-02T20:38:17Z |
2025-06-02 20:39:41.284427 | orchestrator | | user_id | ed63204328b043a089df58c57a170b11 |
2025-06-02 20:39:41.284443 | orchestrator | | volumes_attached | delete_on_termination='False', id='435010df-e96f-4450-8786-a0e18bf2020a' |
2025-06-02 20:39:41.287086 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:41.507061 | orchestrator | + openstack --os-cloud test server show test-1 2025-06-02 20:39:44.744826 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:44.744934 | orchestrator | | Field | Value | 2025-06-02 20:39:44.744976 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:44.744988 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 20:39:44.745000 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 20:39:44.745011 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 20:39:44.745022 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-06-02 20:39:44.745050 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 20:39:44.745062 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 20:39:44.745073 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 20:39:44.745110 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 20:39:44.745139 | orchestrator | | 
OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 20:39:44.745152 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 20:39:44.745171 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 20:39:44.745182 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 20:39:44.745193 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 20:39:44.745204 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 20:39:44.745215 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 20:39:44.745225 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T20:36:11.000000 | 2025-06-02 20:39:44.745236 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 20:39:44.745247 | orchestrator | | accessIPv4 | | 2025-06-02 20:39:44.745294 | orchestrator | | accessIPv6 | | 2025-06-02 20:39:44.745306 | orchestrator | | addresses | auto_allocated_network=10.42.0.38, 192.168.112.169 | 2025-06-02 20:39:44.745324 | orchestrator | | config_drive | | 2025-06-02 20:39:44.745352 | orchestrator | | created | 2025-06-02T20:35:50Z | 2025-06-02 20:39:44.745366 | orchestrator | | description | None | 2025-06-02 20:39:44.745379 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 20:39:44.745392 | orchestrator | | hostId | 57a8fed88f5e6adccedb6437689e5945adce1bd6ebdc6407dd6ddfe2 | 2025-06-02 20:39:44.745405 | orchestrator | | host_status | None | 2025-06-02 20:39:44.745417 | orchestrator | | id | 4a95e195-d28c-466d-b590-4162bfc7f942 | 2025-06-02 20:39:44.745429 | orchestrator | | image | Cirros 0.6.2 (402a0b74-479b-4d34-8466-cfce983dfde8) | 2025-06-02 20:39:44.745442 | orchestrator | | key_name | test | 2025-06-02 20:39:44.745454 | orchestrator 
| | locked | False | 2025-06-02 20:39:44.745471 | orchestrator | | locked_reason | None | 2025-06-02 20:39:44.745485 | orchestrator | | name | test-1 | 2025-06-02 20:39:44.745511 | orchestrator | | pinned_availability_zone | None | 2025-06-02 20:39:44.745524 | orchestrator | | progress | 0 | 2025-06-02 20:39:44.745537 | orchestrator | | project_id | ad1ed8a7e43e4a23aec98b6138f243e4 | 2025-06-02 20:39:44.745549 | orchestrator | | properties | hostname='test-1' | 2025-06-02 20:39:44.745562 | orchestrator | | security_groups | name='icmp' | 2025-06-02 20:39:44.745574 | orchestrator | | | name='ssh' | 2025-06-02 20:39:44.745586 | orchestrator | | server_groups | None | 2025-06-02 20:39:44.745599 | orchestrator | | status | ACTIVE | 2025-06-02 20:39:44.745612 | orchestrator | | tags | test | 2025-06-02 20:39:44.745624 | orchestrator | | trusted_image_certificates | None | 2025-06-02 20:39:44.745649 | orchestrator | | updated | 2025-06-02T20:38:22Z | 2025-06-02 20:39:44.745668 | orchestrator | | user_id | ed63204328b043a089df58c57a170b11 | 2025-06-02 20:39:44.745681 | orchestrator | | volumes_attached | | 2025-06-02 20:39:44.748425 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:44.981435 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-02 20:39:48.136345 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2025-06-02 20:39:48.136431 | orchestrator | | Field | Value | 2025-06-02 20:39:48.136442 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:48.136449 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 20:39:48.136456 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 20:39:48.136462 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 20:39:48.136468 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-06-02 20:39:48.136510 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 20:39:48.136518 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 20:39:48.136524 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 20:39:48.136530 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 20:39:48.136548 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 20:39:48.136555 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 20:39:48.136562 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 20:39:48.136568 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 20:39:48.136574 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 20:39:48.136581 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 20:39:48.136587 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 20:39:48.136599 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T20:36:50.000000 | 2025-06-02 20:39:48.136609 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 20:39:48.136615 | orchestrator | | accessIPv4 | | 2025-06-02 20:39:48.136621 | orchestrator | | accessIPv6 | | 2025-06-02 
20:39:48.136628 | orchestrator | | addresses | auto_allocated_network=10.42.0.61, 192.168.112.133 | 2025-06-02 20:39:48.136638 | orchestrator | | config_drive | | 2025-06-02 20:39:48.136644 | orchestrator | | created | 2025-06-02T20:36:29Z | 2025-06-02 20:39:48.136651 | orchestrator | | description | None | 2025-06-02 20:39:48.136657 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 20:39:48.136663 | orchestrator | | hostId | 4ab7cc68cbabacde050e97017983f1f7be1273552a636baa7683ba59 | 2025-06-02 20:39:48.136669 | orchestrator | | host_status | None | 2025-06-02 20:39:48.136699 | orchestrator | | id | c9e2f9fc-a1e4-4c90-a752-61e4469ea113 | 2025-06-02 20:39:48.136706 | orchestrator | | image | Cirros 0.6.2 (402a0b74-479b-4d34-8466-cfce983dfde8) | 2025-06-02 20:39:48.136712 | orchestrator | | key_name | test | 2025-06-02 20:39:48.136719 | orchestrator | | locked | False | 2025-06-02 20:39:48.136725 | orchestrator | | locked_reason | None | 2025-06-02 20:39:48.136732 | orchestrator | | name | test-2 | 2025-06-02 20:39:48.136741 | orchestrator | | pinned_availability_zone | None | 2025-06-02 20:39:48.136748 | orchestrator | | progress | 0 | 2025-06-02 20:39:48.136754 | orchestrator | | project_id | ad1ed8a7e43e4a23aec98b6138f243e4 | 2025-06-02 20:39:48.136760 | orchestrator | | properties | hostname='test-2' | 2025-06-02 20:39:48.136771 | orchestrator | | security_groups | name='icmp' | 2025-06-02 20:39:48.136777 | orchestrator | | | name='ssh' | 2025-06-02 20:39:48.136783 | orchestrator | | server_groups | None | 2025-06-02 20:39:48.136798 | orchestrator | | status | ACTIVE | 2025-06-02 20:39:48.136805 | orchestrator | | tags | test | 2025-06-02 20:39:48.136811 | orchestrator 
| | trusted_image_certificates | None | 2025-06-02 20:39:48.136817 | orchestrator | | updated | 2025-06-02T20:38:26Z | 2025-06-02 20:39:48.136827 | orchestrator | | user_id | ed63204328b043a089df58c57a170b11 | 2025-06-02 20:39:48.136833 | orchestrator | | volumes_attached | | 2025-06-02 20:39:48.141129 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:48.382997 | orchestrator | + openstack --os-cloud test server show test-3 2025-06-02 20:39:51.377178 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:51.377432 | orchestrator | | Field | Value | 2025-06-02 20:39:51.377456 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:51.377468 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 20:39:51.377479 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 20:39:51.377504 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 20:39:51.377516 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-06-02 20:39:51.377527 | orchestrator | | 
OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 20:39:51.377538 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 20:39:51.377548 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 20:39:51.377559 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 20:39:51.377589 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 20:39:51.377609 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 20:39:51.377620 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 20:39:51.377631 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 20:39:51.377642 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 20:39:51.377658 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 20:39:51.377669 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 20:39:51.377680 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T20:37:22.000000 | 2025-06-02 20:39:51.377691 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 20:39:51.377701 | orchestrator | | accessIPv4 | | 2025-06-02 20:39:51.377712 | orchestrator | | accessIPv6 | | 2025-06-02 20:39:51.377730 | orchestrator | | addresses | auto_allocated_network=10.42.0.46, 192.168.112.147 | 2025-06-02 20:39:51.377747 | orchestrator | | config_drive | | 2025-06-02 20:39:51.377759 | orchestrator | | created | 2025-06-02T20:37:06Z | 2025-06-02 20:39:51.377770 | orchestrator | | description | None | 2025-06-02 20:39:51.377780 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 20:39:51.377791 | orchestrator | | hostId | 57a8fed88f5e6adccedb6437689e5945adce1bd6ebdc6407dd6ddfe2 | 2025-06-02 20:39:51.377807 | 
orchestrator | | host_status | None | 2025-06-02 20:39:51.377818 | orchestrator | | id | 55d281de-70e7-4581-8dc5-4264fc198225 | 2025-06-02 20:39:51.377829 | orchestrator | | image | Cirros 0.6.2 (402a0b74-479b-4d34-8466-cfce983dfde8) | 2025-06-02 20:39:51.377840 | orchestrator | | key_name | test | 2025-06-02 20:39:51.377850 | orchestrator | | locked | False | 2025-06-02 20:39:51.377875 | orchestrator | | locked_reason | None | 2025-06-02 20:39:51.377886 | orchestrator | | name | test-3 | 2025-06-02 20:39:51.377903 | orchestrator | | pinned_availability_zone | None | 2025-06-02 20:39:51.377915 | orchestrator | | progress | 0 | 2025-06-02 20:39:51.377925 | orchestrator | | project_id | ad1ed8a7e43e4a23aec98b6138f243e4 | 2025-06-02 20:39:51.377936 | orchestrator | | properties | hostname='test-3' | 2025-06-02 20:39:51.377947 | orchestrator | | security_groups | name='icmp' | 2025-06-02 20:39:51.377963 | orchestrator | | | name='ssh' | 2025-06-02 20:39:51.377974 | orchestrator | | server_groups | None | 2025-06-02 20:39:51.377985 | orchestrator | | status | ACTIVE | 2025-06-02 20:39:51.377995 | orchestrator | | tags | test | 2025-06-02 20:39:51.378073 | orchestrator | | trusted_image_certificates | None | 2025-06-02 20:39:51.378100 | orchestrator | | updated | 2025-06-02T20:38:31Z | 2025-06-02 20:39:51.378128 | orchestrator | | user_id | ed63204328b043a089df58c57a170b11 | 2025-06-02 20:39:51.378151 | orchestrator | | volumes_attached | | 2025-06-02 20:39:51.382471 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:51.606448 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-02 20:39:54.673208 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:54.673358 | orchestrator | | Field | Value | 2025-06-02 20:39:54.673390 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:54.673401 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 20:39:54.673411 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 20:39:54.673421 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 20:39:54.673453 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-02 20:39:54.673464 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 20:39:54.673473 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 20:39:54.673483 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 20:39:54.673493 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 20:39:54.673520 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 20:39:54.673531 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 20:39:54.673541 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 20:39:54.673551 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 20:39:54.673560 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 20:39:54.673577 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 20:39:54.673587 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 20:39:54.673596 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T20:37:56.000000 | 2025-06-02 20:39:54.673613 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 20:39:54.673623 | orchestrator | | accessIPv4 | | 2025-06-02 20:39:54.673633 | orchestrator | | accessIPv6 | | 2025-06-02 20:39:54.673643 | orchestrator | | addresses | auto_allocated_network=10.42.0.24, 192.168.112.185 | 2025-06-02 20:39:54.673659 | orchestrator | | config_drive | | 2025-06-02 20:39:54.673669 | orchestrator | | created | 2025-06-02T20:37:39Z | 2025-06-02 20:39:54.673679 | orchestrator | | description | None | 2025-06-02 20:39:54.673693 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 20:39:54.673709 | orchestrator | | hostId | 4ab7cc68cbabacde050e97017983f1f7be1273552a636baa7683ba59 | 2025-06-02 20:39:54.673719 | orchestrator | | host_status | None | 2025-06-02 20:39:54.673729 | orchestrator | | id | 23c9e52c-28dc-4f12-af68-2c9727a87be6 | 2025-06-02 20:39:54.673739 | orchestrator | | image | Cirros 0.6.2 (402a0b74-479b-4d34-8466-cfce983dfde8) | 2025-06-02 20:39:54.673749 | orchestrator | | key_name | test | 2025-06-02 20:39:54.673759 | orchestrator | | locked | False | 2025-06-02 20:39:54.673768 | orchestrator | | locked_reason | None | 2025-06-02 20:39:54.673778 | orchestrator | | name | test-4 | 2025-06-02 20:39:54.673798 | orchestrator | | pinned_availability_zone | None | 2025-06-02 20:39:54.673816 | orchestrator | | progress | 0 | 2025-06-02 20:39:54.673839 | orchestrator | | project_id | ad1ed8a7e43e4a23aec98b6138f243e4 | 2025-06-02 20:39:54.673867 | orchestrator | | properties | hostname='test-4' | 2025-06-02 
20:39:54.673880 | orchestrator | | security_groups | name='icmp' | 2025-06-02 20:39:54.673890 | orchestrator | | | name='ssh' | 2025-06-02 20:39:54.673900 | orchestrator | | server_groups | None | 2025-06-02 20:39:54.673910 | orchestrator | | status | ACTIVE | 2025-06-02 20:39:54.673920 | orchestrator | | tags | test | 2025-06-02 20:39:54.673929 | orchestrator | | trusted_image_certificates | None | 2025-06-02 20:39:54.673939 | orchestrator | | updated | 2025-06-02T20:38:35Z | 2025-06-02 20:39:54.673955 | orchestrator | | user_id | ed63204328b043a089df58c57a170b11 | 2025-06-02 20:39:54.673965 | orchestrator | | volumes_attached | | 2025-06-02 20:39:54.676755 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:39:54.907369 | orchestrator | + server_ping 2025-06-02 20:39:54.908930 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-02 20:39:54.909199 | orchestrator | ++ tr -d '\r' 2025-06-02 20:39:57.652066 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 20:39:57.652174 | orchestrator | + ping -c3 192.168.112.147 2025-06-02 20:39:57.666868 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data. 
2025-06-02 20:39:57.666979 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=8.68 ms 2025-06-02 20:39:58.663022 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.85 ms 2025-06-02 20:39:59.662987 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=1.81 ms 2025-06-02 20:39:59.663097 | orchestrator | 2025-06-02 20:39:59.663113 | orchestrator | --- 192.168.112.147 ping statistics --- 2025-06-02 20:39:59.663128 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 20:39:59.663140 | orchestrator | rtt min/avg/max/mdev = 1.810/4.447/8.682/3.024 ms 2025-06-02 20:39:59.663440 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 20:39:59.663465 | orchestrator | + ping -c3 192.168.112.185 2025-06-02 20:39:59.677768 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 2025-06-02 20:39:59.677870 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=9.98 ms 2025-06-02 20:40:00.671378 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.83 ms 2025-06-02 20:40:01.671858 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.09 ms 2025-06-02 20:40:01.672211 | orchestrator | 2025-06-02 20:40:01.672246 | orchestrator | --- 192.168.112.185 ping statistics --- 2025-06-02 20:40:01.672268 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-02 20:40:01.672280 | orchestrator | rtt min/avg/max/mdev = 2.091/4.965/9.975/3.555 ms 2025-06-02 20:40:01.672306 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 20:40:01.672366 | orchestrator | + ping -c3 192.168.112.192 2025-06-02 20:40:01.683959 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 
2025-06-02 20:40:01.684033 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.47 ms 2025-06-02 20:40:02.681673 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.31 ms 2025-06-02 20:40:03.683070 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=2.04 ms 2025-06-02 20:40:03.683427 | orchestrator | 2025-06-02 20:40:03.683467 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-06-02 20:40:03.683487 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 20:40:03.683499 | orchestrator | rtt min/avg/max/mdev = 2.043/3.607/6.470/2.027 ms 2025-06-02 20:40:03.683526 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 20:40:03.683538 | orchestrator | + ping -c3 192.168.112.169 2025-06-02 20:40:03.695086 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 2025-06-02 20:40:03.695158 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=6.80 ms 2025-06-02 20:40:04.692526 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.49 ms 2025-06-02 20:40:05.694869 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.39 ms 2025-06-02 20:40:05.694971 | orchestrator | 2025-06-02 20:40:05.694985 | orchestrator | --- 192.168.112.169 ping statistics --- 2025-06-02 20:40:05.694998 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 20:40:05.695009 | orchestrator | rtt min/avg/max/mdev = 2.391/3.892/6.798/2.054 ms 2025-06-02 20:40:05.695021 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 20:40:05.695032 | orchestrator | + ping -c3 192.168.112.133 2025-06-02 20:40:05.707219 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
2025-06-02 20:40:05.707365 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=7.89 ms
2025-06-02 20:40:06.703030 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.41 ms
2025-06-02 20:40:07.704810 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.96 ms
2025-06-02 20:40:07.704911 | orchestrator |
2025-06-02 20:40:07.704936 | orchestrator | --- 192.168.112.133 ping statistics ---
2025-06-02 20:40:07.704949 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:40:07.704959 | orchestrator | rtt min/avg/max/mdev = 1.960/4.086/7.889/2.695 ms
2025-06-02 20:40:07.705221 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-02 20:40:07.705243 | orchestrator | + compute_list
2025-06-02 20:40:07.705253 | orchestrator | + osism manage compute list testbed-node-3
2025-06-02 20:40:10.887821 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:40:10.887926 | orchestrator | | ID                                   | Name   | Status   |
2025-06-02 20:40:10.887940 | orchestrator | |--------------------------------------+--------+----------|
2025-06-02 20:40:10.887952 | orchestrator | | 23c9e52c-28dc-4f12-af68-2c9727a87be6 | test-4 | ACTIVE   |
2025-06-02 20:40:10.887963 | orchestrator | | c9e2f9fc-a1e4-4c90-a752-61e4469ea113 | test-2 | ACTIVE   |
2025-06-02 20:40:10.887974 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:40:11.114004 | orchestrator | + osism manage compute list testbed-node-4
2025-06-02 20:40:14.339902 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:40:14.340015 | orchestrator | | ID                                   | Name   | Status   |
2025-06-02 20:40:14.340032 | orchestrator | |--------------------------------------+--------+----------|
2025-06-02 20:40:14.340045 | orchestrator | | f6bb06f0-2809-4245-af9b-f980eb79d018 | test   | ACTIVE   |
2025-06-02 20:40:14.340056 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:40:14.569441 | orchestrator | + osism manage compute list testbed-node-5
2025-06-02 20:40:17.702877 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:40:17.702989 | orchestrator | | ID                                   | Name   | Status   |
2025-06-02 20:40:17.703003 | orchestrator | |--------------------------------------+--------+----------|
2025-06-02 20:40:17.703015 | orchestrator | | 55d281de-70e7-4581-8dc5-4264fc198225 | test-3 | ACTIVE   |
2025-06-02 20:40:17.703026 | orchestrator | | 4a95e195-d28c-466d-b590-4162bfc7f942 | test-1 | ACTIVE   |
2025-06-02 20:40:17.703037 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:40:17.925667 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-06-02 20:40:20.703600 | orchestrator | 2025-06-02 20:40:20 | INFO  | Live migrating server f6bb06f0-2809-4245-af9b-f980eb79d018
2025-06-02 20:40:33.651192 | orchestrator | 2025-06-02 20:40:33 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:40:36.149320 | orchestrator | 2025-06-02 20:40:36 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:40:38.802738 | orchestrator | 2025-06-02 20:40:38 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:40:41.178831 | orchestrator | 2025-06-02 20:40:41 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:40:43.464010 | orchestrator | 2025-06-02 20:40:43 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:40:45.762619 | orchestrator | 2025-06-02 20:40:45 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:40:48.097207 | orchestrator | 2025-06-02 20:40:48 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:40:50.432330 | orchestrator | 2025-06-02 20:40:50 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:40:52.757868 | orchestrator | 2025-06-02 20:40:52 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:40:55.202894 | orchestrator | 2025-06-02 20:40:55 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) completed with status ACTIVE
2025-06-02 20:40:55.440717 | orchestrator | + compute_list
2025-06-02 20:40:55.440817 | orchestrator | + osism manage compute list testbed-node-3
2025-06-02 20:40:58.315993 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:40:58.316103 | orchestrator | | ID                                   | Name   | Status   |
2025-06-02 20:40:58.316117 | orchestrator | |--------------------------------------+--------+----------|
2025-06-02 20:40:58.316129 | orchestrator | | 23c9e52c-28dc-4f12-af68-2c9727a87be6 | test-4 | ACTIVE   |
2025-06-02 20:40:58.316140 | orchestrator | | c9e2f9fc-a1e4-4c90-a752-61e4469ea113 | test-2 | ACTIVE   |
2025-06-02 20:40:58.316151 | orchestrator | | f6bb06f0-2809-4245-af9b-f980eb79d018 | test   | ACTIVE   |
2025-06-02 20:40:58.316162 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:40:58.538907 | orchestrator | + osism manage compute list testbed-node-4
2025-06-02 20:41:01.097450 | orchestrator | +------+--------+----------+
2025-06-02 20:41:01.097627 | orchestrator | | ID   | Name   | Status   |
2025-06-02 20:41:01.097646 | orchestrator | |------+--------+----------|
2025-06-02 20:41:01.097658 | orchestrator | +------+--------+----------+
2025-06-02 20:41:01.332989 | orchestrator | + osism manage compute list testbed-node-5
2025-06-02 20:41:04.191033 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:41:04.191129 | orchestrator | | ID                                   | Name   | Status   |
2025-06-02 20:41:04.191143 | orchestrator | |--------------------------------------+--------+----------|
2025-06-02 20:41:04.191152 | orchestrator | | 55d281de-70e7-4581-8dc5-4264fc198225 | test-3 | ACTIVE   |
2025-06-02 20:41:04.191161 | orchestrator | | 4a95e195-d28c-466d-b590-4162bfc7f942 | test-1 | ACTIVE   |
2025-06-02 20:41:04.191170 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:41:04.426513 | orchestrator | + server_ping
2025-06-02 20:41:04.427903 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-02 20:41:04.428037 | orchestrator | ++ tr -d '\r'
2025-06-02 20:41:07.219436 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:41:07.219603 | orchestrator | + ping -c3 192.168.112.147
2025-06-02 20:41:07.232915 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2025-06-02 20:41:07.233022 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=11.1 ms
2025-06-02 20:41:08.225958 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.70 ms
2025-06-02 20:41:09.227993 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.07 ms
2025-06-02 20:41:09.228098 | orchestrator |
2025-06-02 20:41:09.228114 | orchestrator | --- 192.168.112.147 ping statistics ---
2025-06-02 20:41:09.228128 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:41:09.228140 | orchestrator | rtt min/avg/max/mdev = 2.074/5.278/11.066/4.100 ms
2025-06-02 20:41:09.228152 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:41:09.228164 | orchestrator | + ping -c3 192.168.112.185
2025-06-02 20:41:09.243827 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2025-06-02 20:41:09.243922 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=11.7 ms
2025-06-02 20:41:10.236849 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.61 ms
2025-06-02 20:41:11.237851 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.04 ms
2025-06-02 20:41:11.237958 | orchestrator |
2025-06-02 20:41:11.237974 | orchestrator | --- 192.168.112.185 ping statistics ---
2025-06-02 20:41:11.237987 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-06-02 20:41:11.237998 | orchestrator | rtt min/avg/max/mdev = 2.039/5.460/11.729/4.438 ms
2025-06-02 20:41:11.238383 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:41:11.238408 | orchestrator | + ping -c3 192.168.112.192
2025-06-02 20:41:11.248497 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-06-02 20:41:11.248546 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=7.58 ms
2025-06-02 20:41:12.245907 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.76 ms
2025-06-02 20:41:13.247429 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.76 ms
2025-06-02 20:41:13.247519 | orchestrator |
2025-06-02 20:41:13.247532 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-06-02 20:41:13.247542 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-06-02 20:41:13.247550 | orchestrator | rtt min/avg/max/mdev = 1.760/4.031/7.579/2.541 ms
2025-06-02 20:41:13.247630 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:41:13.247648 | orchestrator | + ping -c3 192.168.112.169
2025-06-02 20:41:13.257225 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-06-02 20:41:13.257294 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=5.12 ms
2025-06-02 20:41:14.255902 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.59 ms
2025-06-02 20:41:15.256209 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.78 ms
2025-06-02 20:41:15.256343 | orchestrator |
2025-06-02 20:41:15.256362 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-06-02 20:41:15.256378 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 20:41:15.256390 | orchestrator | rtt min/avg/max/mdev = 1.782/3.163/5.121/1.422 ms
2025-06-02 20:41:15.256837 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:41:15.256879 | orchestrator | + ping -c3 192.168.112.133
2025-06-02 20:41:15.269994 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2025-06-02 20:41:15.270133 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=8.13 ms
2025-06-02 20:41:16.266548 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=3.09 ms
2025-06-02 20:41:17.266648 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=2.00 ms
2025-06-02 20:41:17.266767 | orchestrator |
2025-06-02 20:41:17.266793 | orchestrator | --- 192.168.112.133 ping statistics ---
2025-06-02 20:41:17.266815 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:41:17.266836 | orchestrator | rtt min/avg/max/mdev = 1.995/4.404/8.132/2.673 ms
2025-06-02 20:41:17.267520 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2025-06-02 20:41:20.093722 | orchestrator | 2025-06-02 20:41:20 | INFO  | Live migrating server 55d281de-70e7-4581-8dc5-4264fc198225
2025-06-02 20:41:32.846330 | orchestrator | 2025-06-02 20:41:32 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:41:35.171577 | orchestrator | 2025-06-02 20:41:35 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:41:37.542503 | orchestrator | 2025-06-02 20:41:37 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:41:39.812935 | orchestrator | 2025-06-02 20:41:39 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:41:42.108122 | orchestrator | 2025-06-02 20:41:42 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:41:44.372287 | orchestrator | 2025-06-02 20:41:44 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:41:46.655852 | orchestrator | 2025-06-02 20:41:46 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:41:49.167263 | orchestrator | 2025-06-02 20:41:49 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) completed with status ACTIVE
2025-06-02 20:41:49.168862 | orchestrator | 2025-06-02 20:41:49 | INFO  | Live migrating server 4a95e195-d28c-466d-b590-4162bfc7f942
2025-06-02 20:42:02.276949 | orchestrator | 2025-06-02 20:42:02 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:42:04.649816 | orchestrator | 2025-06-02 20:42:04 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:42:07.142713 | orchestrator | 2025-06-02 20:42:07 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:42:09.455650 | orchestrator | 2025-06-02 20:42:09 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:42:11.816629 | orchestrator | 2025-06-02 20:42:11 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:42:14.145194 | orchestrator | 2025-06-02 20:42:14 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:42:16.431631 | orchestrator | 2025-06-02 20:42:16 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) completed with status ACTIVE
2025-06-02 20:42:16.660074 | orchestrator | + compute_list
2025-06-02 20:42:16.660193 | orchestrator | + osism manage compute list testbed-node-3
2025-06-02 20:42:19.755953 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:42:19.756062 | orchestrator | | ID                                   | Name   | Status   |
2025-06-02 20:42:19.756077 | orchestrator | |--------------------------------------+--------+----------|
2025-06-02 20:42:19.756105 | orchestrator | | 23c9e52c-28dc-4f12-af68-2c9727a87be6 | test-4 | ACTIVE   |
2025-06-02 20:42:19.756116 | orchestrator | | 55d281de-70e7-4581-8dc5-4264fc198225 | test-3 | ACTIVE   |
2025-06-02 20:42:19.756127 | orchestrator | | c9e2f9fc-a1e4-4c90-a752-61e4469ea113 | test-2 | ACTIVE   |
2025-06-02 20:42:19.756138 | orchestrator | | 4a95e195-d28c-466d-b590-4162bfc7f942 | test-1 | ACTIVE   |
2025-06-02 20:42:19.756149 | orchestrator | | f6bb06f0-2809-4245-af9b-f980eb79d018 | test   | ACTIVE   |
2025-06-02 20:42:19.756160 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:42:19.973869 | orchestrator | + osism manage compute list testbed-node-4
2025-06-02 20:42:22.458746 | orchestrator | +------+--------+----------+
2025-06-02 20:42:22.458928 | orchestrator | | ID   | Name   | Status   |
2025-06-02 20:42:22.458945 | orchestrator | |------+--------+----------|
2025-06-02 20:42:22.458957 | orchestrator | +------+--------+----------+
2025-06-02 20:42:22.684038 | orchestrator | + osism manage compute list testbed-node-5
2025-06-02 20:42:25.275648 | orchestrator | +------+--------+----------+
2025-06-02 20:42:25.275747 | orchestrator | | ID   | Name   | Status   |
2025-06-02 20:42:25.275755 | orchestrator | |------+--------+----------|
2025-06-02 20:42:25.275763 | orchestrator | +------+--------+----------+
2025-06-02 20:42:25.511540 | orchestrator | + server_ping
2025-06-02 20:42:25.512914 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-02 20:42:25.513950 | orchestrator | ++ tr -d '\r'
2025-06-02 20:42:28.263066 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:42:28.263179 | orchestrator | + ping -c3 192.168.112.147
2025-06-02 20:42:28.278418 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2025-06-02 20:42:28.278490 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=12.7 ms
2025-06-02 20:42:29.273181 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.73 ms
2025-06-02 20:42:30.271240 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=1.88 ms
2025-06-02 20:42:30.271346 | orchestrator |
2025-06-02 20:42:30.271362 | orchestrator | --- 192.168.112.147 ping statistics ---
2025-06-02 20:42:30.271375 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:42:30.271386 | orchestrator | rtt min/avg/max/mdev = 1.884/5.753/12.650/4.888 ms
2025-06-02 20:42:30.271726 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:42:30.271754 | orchestrator | + ping -c3 192.168.112.185
2025-06-02 20:42:30.284219 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2025-06-02 20:42:30.284352 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=9.01 ms
2025-06-02 20:42:31.279361 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.90 ms
2025-06-02 20:42:32.279196 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.15 ms
2025-06-02 20:42:32.279281 | orchestrator |
2025-06-02 20:42:32.279290 | orchestrator | --- 192.168.112.185 ping statistics ---
2025-06-02 20:42:32.279298 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 20:42:32.279306 | orchestrator | rtt min/avg/max/mdev = 2.152/4.686/9.011/3.072 ms
2025-06-02 20:42:32.279700 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:42:32.279727 | orchestrator | + ping -c3 192.168.112.192
2025-06-02 20:42:32.294124 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-06-02 20:42:32.294173 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=9.72 ms
2025-06-02 20:42:33.288723 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.06 ms
2025-06-02 20:42:34.289560 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=2.02 ms
2025-06-02 20:42:34.289653 | orchestrator |
2025-06-02 20:42:34.289663 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-06-02 20:42:34.289672 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:42:34.289678 | orchestrator | rtt min/avg/max/mdev = 2.016/4.599/9.722/3.622 ms
2025-06-02 20:42:34.289748 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:42:34.289762 | orchestrator | + ping -c3 192.168.112.169
2025-06-02 20:42:34.300225 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-06-02 20:42:34.300319 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=6.21 ms
2025-06-02 20:42:35.298185 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.49 ms
2025-06-02 20:42:36.300324 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.07 ms
2025-06-02 20:42:36.300437 | orchestrator |
2025-06-02 20:42:36.300455 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-06-02 20:42:36.300468 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:42:36.300479 | orchestrator | rtt min/avg/max/mdev = 2.068/3.589/6.207/1.858 ms
2025-06-02 20:42:36.300491 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:42:36.300502 | orchestrator | + ping -c3 192.168.112.133
2025-06-02 20:42:36.314936 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2025-06-02 20:42:36.315042 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=10.1 ms
2025-06-02 20:42:37.307928 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.42 ms
2025-06-02 20:42:38.310002 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=2.27 ms
2025-06-02 20:42:38.310107 | orchestrator |
2025-06-02 20:42:38.310115 | orchestrator | --- 192.168.112.133 ping statistics ---
2025-06-02 20:42:38.310121 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 20:42:38.310126 | orchestrator | rtt min/avg/max/mdev = 2.267/4.917/10.068/3.642 ms
2025-06-02 20:42:38.310132 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-06-02 20:42:41.540762 | orchestrator | 2025-06-02 20:42:41 | INFO  | Live migrating server 23c9e52c-28dc-4f12-af68-2c9727a87be6
2025-06-02 20:42:53.465871 | orchestrator | 2025-06-02 20:42:53 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:42:55.792660 | orchestrator | 2025-06-02 20:42:55 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:42:58.136043 | orchestrator | 2025-06-02 20:42:58 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:43:00.417650 | orchestrator | 2025-06-02 20:43:00 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:43:02.717412 | orchestrator | 2025-06-02 20:43:02 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:43:05.027672 | orchestrator | 2025-06-02 20:43:05 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:43:07.397285 | orchestrator | 2025-06-02 20:43:07 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) completed with status ACTIVE
2025-06-02 20:43:07.397386 | orchestrator | 2025-06-02 20:43:07 | INFO  | Live migrating server 55d281de-70e7-4581-8dc5-4264fc198225
2025-06-02 20:43:19.050402 | orchestrator | 2025-06-02 20:43:19 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:43:21.414625 | orchestrator | 2025-06-02 20:43:21 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:43:23.720468 | orchestrator | 2025-06-02 20:43:23 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:43:26.040527 | orchestrator | 2025-06-02 20:43:26 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:43:28.379341 | orchestrator | 2025-06-02 20:43:28 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:43:30.638386 | orchestrator | 2025-06-02 20:43:30 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:43:32.939868 | orchestrator | 2025-06-02 20:43:32 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:43:35.281460 | orchestrator | 2025-06-02 20:43:35 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) completed with status ACTIVE
2025-06-02 20:43:35.281558 | orchestrator | 2025-06-02 20:43:35 | INFO  | Live migrating server c9e2f9fc-a1e4-4c90-a752-61e4469ea113
2025-06-02 20:43:45.946597 | orchestrator | 2025-06-02 20:43:45 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:43:48.322218 | orchestrator | 2025-06-02 20:43:48 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:43:50.666876 | orchestrator | 2025-06-02 20:43:50 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:43:53.041890 | orchestrator | 2025-06-02 20:43:53 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:43:55.348771 | orchestrator | 2025-06-02 20:43:55 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:43:57.700230 | orchestrator | 2025-06-02 20:43:57 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:44:00.030767 | orchestrator | 2025-06-02 20:44:00 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) completed with status ACTIVE
2025-06-02 20:44:00.030889 | orchestrator | 2025-06-02 20:44:00 | INFO  | Live migrating server 4a95e195-d28c-466d-b590-4162bfc7f942
2025-06-02 20:44:10.835274 | orchestrator | 2025-06-02 20:44:10 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:44:13.197529 | orchestrator | 2025-06-02 20:44:13 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:44:15.552453 | orchestrator | 2025-06-02 20:44:15 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:44:17.818878 | orchestrator | 2025-06-02 20:44:17 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:44:20.071951 | orchestrator | 2025-06-02 20:44:20 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:44:22.360768 | orchestrator | 2025-06-02 20:44:22 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:44:24.722204 | orchestrator | 2025-06-02 20:44:24 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02 20:44:27.003400 | orchestrator | 2025-06-02 20:44:27 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) completed with status ACTIVE
2025-06-02 20:44:27.003510 | orchestrator | 2025-06-02 20:44:27 | INFO  | Live migrating server f6bb06f0-2809-4245-af9b-f980eb79d018
2025-06-02 20:44:37.021763 | orchestrator | 2025-06-02 20:44:37 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:44:39.368633 | orchestrator | 2025-06-02 20:44:39 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:44:41.722805 | orchestrator | 2025-06-02 20:44:41 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:44:44.060510 | orchestrator | 2025-06-02 20:44:44 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:44:46.363462 | orchestrator | 2025-06-02 20:44:46 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:44:48.694796 | orchestrator | 2025-06-02 20:44:48 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:44:50.983154 | orchestrator | 2025-06-02 20:44:50 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:44:53.341765 | orchestrator | 2025-06-02 20:44:53 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress
2025-06-02 20:44:55.633735 | orchestrator | 2025-06-02 20:44:55 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) completed with status ACTIVE
2025-06-02 20:44:55.871555 | orchestrator | + compute_list
2025-06-02 20:44:55.871680 | orchestrator | + osism manage compute list testbed-node-3
2025-06-02 20:44:58.464981 | orchestrator | +------+--------+----------+
2025-06-02 20:44:58.465105 | orchestrator | | ID   | Name   | Status   |
2025-06-02 20:44:58.465130 | orchestrator | |------+--------+----------|
2025-06-02 20:44:58.465143 | orchestrator | +------+--------+----------+
2025-06-02 20:44:58.718691 | orchestrator | + osism manage compute list testbed-node-4
2025-06-02 20:45:01.880879 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:45:01.881005 | orchestrator | | ID                                   | Name   | Status   |
2025-06-02 20:45:01.881034 | orchestrator | |--------------------------------------+--------+----------|
2025-06-02 20:45:01.881055 | orchestrator | | 23c9e52c-28dc-4f12-af68-2c9727a87be6 | test-4 | ACTIVE   |
2025-06-02 20:45:01.881080 | orchestrator | | 55d281de-70e7-4581-8dc5-4264fc198225 | test-3 | ACTIVE   |
2025-06-02 20:45:01.881106 | orchestrator | | c9e2f9fc-a1e4-4c90-a752-61e4469ea113 | test-2 | ACTIVE   |
2025-06-02 20:45:01.881126 | orchestrator | | 4a95e195-d28c-466d-b590-4162bfc7f942 | test-1 | ACTIVE   |
2025-06-02 20:45:01.881148 | orchestrator | | f6bb06f0-2809-4245-af9b-f980eb79d018 | test   | ACTIVE   |
2025-06-02 20:45:01.881168 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 20:45:02.116817 | orchestrator | + osism manage compute list testbed-node-5
2025-06-02 20:45:04.602812 | orchestrator | +------+--------+----------+
2025-06-02 20:45:04.602923 | orchestrator | | ID   | Name   | Status   |
2025-06-02 20:45:04.602939 | orchestrator | |------+--------+----------|
2025-06-02 20:45:04.602951 | orchestrator | +------+--------+----------+
2025-06-02 20:45:04.832020 | orchestrator | + server_ping
2025-06-02 20:45:04.832696 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-02 20:45:04.833080 | orchestrator | ++ tr -d '\r'
2025-06-02 20:45:07.865209 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:45:07.865427 | orchestrator | + ping -c3 192.168.112.147
2025-06-02 20:45:07.876312 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2025-06-02 20:45:07.876409 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=8.00 ms
2025-06-02 20:45:08.872587 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.95 ms
2025-06-02 20:45:09.873380 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.04 ms
2025-06-02 20:45:09.873503 | orchestrator |
2025-06-02 20:45:09.873521 | orchestrator | --- 192.168.112.147 ping statistics ---
2025-06-02 20:45:09.873534 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 20:45:09.873546 | orchestrator | rtt min/avg/max/mdev = 2.039/4.329/8.001/2.622 ms
2025-06-02 20:45:09.873755 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:45:09.874361 | orchestrator | + ping -c3 192.168.112.185
2025-06-02 20:45:09.890802 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2025-06-02 20:45:09.890857 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=12.6 ms
2025-06-02 20:45:10.883308 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=3.14 ms
2025-06-02 20:45:11.884520 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.28 ms
2025-06-02 20:45:11.884642 | orchestrator |
2025-06-02 20:45:11.884666 | orchestrator | --- 192.168.112.185 ping statistics ---
2025-06-02 20:45:11.884688 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:45:11.884705 | orchestrator | rtt min/avg/max/mdev = 2.276/6.020/12.642/4.695 ms
2025-06-02 20:45:11.884718 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:45:11.884730 | orchestrator | + ping -c3 192.168.112.192
2025-06-02 20:45:11.894899 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-06-02 20:45:11.894958 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.03 ms
2025-06-02 20:45:12.892994 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.68 ms
2025-06-02 20:45:13.895147 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=2.07 ms
2025-06-02 20:45:13.895237 | orchestrator |
2025-06-02 20:45:13.895248 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-06-02 20:45:13.895258 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:45:13.895330 | orchestrator | rtt min/avg/max/mdev = 2.070/3.596/6.034/1.742 ms
2025-06-02 20:45:13.895338 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:45:13.895346 | orchestrator | + ping -c3 192.168.112.169
2025-06-02 20:45:13.907191 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-06-02 20:45:13.907293 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=7.14 ms
2025-06-02 20:45:14.904337 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.56 ms
2025-06-02 20:45:15.905208 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.64 ms
2025-06-02 20:45:15.905385 | orchestrator |
2025-06-02 20:45:15.905404 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-06-02 20:45:15.905417 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:45:15.905429 | orchestrator | rtt min/avg/max/mdev = 1.639/3.778/7.139/2.405 ms
2025-06-02 20:45:15.905742 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:45:15.905768 | orchestrator | + ping -c3 192.168.112.133
2025-06-02 20:45:15.917483 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2025-06-02 20:45:15.917579 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=6.94 ms
2025-06-02 20:45:16.914131 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.21 ms
2025-06-02 20:45:17.915589 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=2.24 ms
2025-06-02 20:45:17.915677 | orchestrator |
2025-06-02 20:45:17.915710 | orchestrator | --- 192.168.112.133 ping statistics ---
2025-06-02 20:45:17.915720 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:45:17.915726 | orchestrator | rtt min/avg/max/mdev = 2.207/3.796/6.941/2.223 ms
2025-06-02 20:45:17.916093 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-06-02 20:45:20.908978 | orchestrator | 2025-06-02 20:45:20 | INFO  | Live migrating server 23c9e52c-28dc-4f12-af68-2c9727a87be6
2025-06-02 20:45:31.307566 | orchestrator | 2025-06-02 20:45:31 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:45:33.679390 | orchestrator | 2025-06-02 20:45:33 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:45:36.096156 | orchestrator | 2025-06-02 20:45:36 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:45:38.474825 | orchestrator | 2025-06-02 20:45:38 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:45:40.804536 | orchestrator | 2025-06-02 20:45:40 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:45:43.324624 | orchestrator | 2025-06-02 20:45:43 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:45:45.660101 | orchestrator | 2025-06-02 20:45:45 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) is still in progress
2025-06-02 20:45:47.927397 | orchestrator | 2025-06-02 20:45:47 | INFO  | Live migration of 23c9e52c-28dc-4f12-af68-2c9727a87be6 (test-4) completed with status ACTIVE
2025-06-02 20:45:47.927504 | orchestrator | 2025-06-02 20:45:47 | INFO  | Live migrating server 55d281de-70e7-4581-8dc5-4264fc198225
2025-06-02 20:45:57.604879 | orchestrator | 2025-06-02 20:45:57 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:45:59.949448 | orchestrator | 2025-06-02 20:45:59 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:46:02.288923 | orchestrator | 2025-06-02 20:46:02 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:46:04.587740 | orchestrator | 2025-06-02 20:46:04 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:46:06.878139 | orchestrator | 2025-06-02 20:46:06 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:46:09.229707 | orchestrator | 2025-06-02 20:46:09 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) is still in progress
2025-06-02 20:46:11.713887 | orchestrator | 2025-06-02 20:46:11 | INFO  | Live migration of 55d281de-70e7-4581-8dc5-4264fc198225 (test-3) completed with status ACTIVE
2025-06-02 20:46:11.715982 | orchestrator | 2025-06-02 20:46:11 | INFO  | Live migrating server c9e2f9fc-a1e4-4c90-a752-61e4469ea113
2025-06-02 20:46:21.916023 | orchestrator | 2025-06-02 20:46:21 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:46:24.273086 | orchestrator | 2025-06-02 20:46:24 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:46:26.750302 | orchestrator | 2025-06-02 20:46:26 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:46:29.087549 | orchestrator | 2025-06-02 20:46:29 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:46:31.330830 | orchestrator | 2025-06-02 20:46:31 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:46:33.598870 | orchestrator | 2025-06-02 20:46:33 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) is still in progress
2025-06-02 20:46:35.856353 | orchestrator | 2025-06-02 20:46:35 | INFO  | Live migration of c9e2f9fc-a1e4-4c90-a752-61e4469ea113 (test-2) completed with status ACTIVE
2025-06-02 20:46:35.856507 | orchestrator | 2025-06-02 20:46:35 | INFO  | Live migrating server 4a95e195-d28c-466d-b590-4162bfc7f942
2025-06-02 20:46:45.534717 | orchestrator | 2025-06-02 20:46:45 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress
2025-06-02
20:46:47.879270 | orchestrator | 2025-06-02 20:46:47 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress 2025-06-02 20:46:50.222328 | orchestrator | 2025-06-02 20:46:50 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress 2025-06-02 20:46:52.501808 | orchestrator | 2025-06-02 20:46:52 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress 2025-06-02 20:46:54.798663 | orchestrator | 2025-06-02 20:46:54 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress 2025-06-02 20:46:57.082863 | orchestrator | 2025-06-02 20:46:57 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress 2025-06-02 20:46:59.392874 | orchestrator | 2025-06-02 20:46:59 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) is still in progress 2025-06-02 20:47:01.725860 | orchestrator | 2025-06-02 20:47:01 | INFO  | Live migration of 4a95e195-d28c-466d-b590-4162bfc7f942 (test-1) completed with status ACTIVE 2025-06-02 20:47:01.725971 | orchestrator | 2025-06-02 20:47:01 | INFO  | Live migrating server f6bb06f0-2809-4245-af9b-f980eb79d018 2025-06-02 20:47:12.007909 | orchestrator | 2025-06-02 20:47:12 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress 2025-06-02 20:47:14.386860 | orchestrator | 2025-06-02 20:47:14 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress 2025-06-02 20:47:16.736100 | orchestrator | 2025-06-02 20:47:16 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress 2025-06-02 20:47:19.085883 | orchestrator | 2025-06-02 20:47:19 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress 2025-06-02 20:47:21.442849 | orchestrator | 2025-06-02 20:47:21 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 
(test) is still in progress 2025-06-02 20:47:23.788051 | orchestrator | 2025-06-02 20:47:23 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress 2025-06-02 20:47:26.078291 | orchestrator | 2025-06-02 20:47:26 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress 2025-06-02 20:47:28.334634 | orchestrator | 2025-06-02 20:47:28 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) is still in progress 2025-06-02 20:47:30.690568 | orchestrator | 2025-06-02 20:47:30 | INFO  | Live migration of f6bb06f0-2809-4245-af9b-f980eb79d018 (test) completed with status ACTIVE 2025-06-02 20:47:30.921747 | orchestrator | + compute_list 2025-06-02 20:47:30.921847 | orchestrator | + osism manage compute list testbed-node-3 2025-06-02 20:47:33.361970 | orchestrator | +------+--------+----------+ 2025-06-02 20:47:33.362192 | orchestrator | | ID | Name | Status | 2025-06-02 20:47:33.362209 | orchestrator | |------+--------+----------| 2025-06-02 20:47:33.362220 | orchestrator | +------+--------+----------+ 2025-06-02 20:47:33.593459 | orchestrator | + osism manage compute list testbed-node-4 2025-06-02 20:47:36.099663 | orchestrator | +------+--------+----------+ 2025-06-02 20:47:36.099777 | orchestrator | | ID | Name | Status | 2025-06-02 20:47:36.099792 | orchestrator | |------+--------+----------| 2025-06-02 20:47:36.099803 | orchestrator | +------+--------+----------+ 2025-06-02 20:47:36.348868 | orchestrator | + osism manage compute list testbed-node-5 2025-06-02 20:47:39.347195 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 20:47:39.347333 | orchestrator | | ID | Name | Status | 2025-06-02 20:47:39.347361 | orchestrator | |--------------------------------------+--------+----------| 2025-06-02 20:47:39.347382 | orchestrator | | 23c9e52c-28dc-4f12-af68-2c9727a87be6 | test-4 | ACTIVE | 2025-06-02 20:47:39.347401 | orchestrator | | 
55d281de-70e7-4581-8dc5-4264fc198225 | test-3 | ACTIVE | 2025-06-02 20:47:39.347421 | orchestrator | | c9e2f9fc-a1e4-4c90-a752-61e4469ea113 | test-2 | ACTIVE | 2025-06-02 20:47:39.347440 | orchestrator | | 4a95e195-d28c-466d-b590-4162bfc7f942 | test-1 | ACTIVE | 2025-06-02 20:47:39.347460 | orchestrator | | f6bb06f0-2809-4245-af9b-f980eb79d018 | test | ACTIVE | 2025-06-02 20:47:39.347479 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 20:47:39.579814 | orchestrator | + server_ping 2025-06-02 20:47:39.580656 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-02 20:47:39.581271 | orchestrator | ++ tr -d '\r' 2025-06-02 20:47:42.340268 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 20:47:42.340339 | orchestrator | + ping -c3 192.168.112.147 2025-06-02 20:47:42.357910 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data. 
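The `set -x` trace above shows the body of the `server_ping` check: list every ACTIVE floating IP and ping each one three times. A minimal reconstruction of that helper, assuming only what the trace shows (the function name and loop are visible in the trace; the surrounding script scaffolding is an assumption):

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the server_ping helper from the job trace.
# The loop body matches the `+`-prefixed trace lines; everything else
# is an assumption.

server_ping() {
    # List ACTIVE floating IPs; `tr -d '\r'` strips stray carriage
    # returns from the CLI output before the for-loop word-splits it.
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        # Three echo requests per address; a non-zero exit (total loss)
        # would fail the job step.
        ping -c3 "$address"
    done
}
```

Without the `tr -d '\r'`, a CRLF-terminated CLI line would leave `ping` trying to resolve `192.168.112.147\r`, which is exactly the failure mode the trace guards against.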
2025-06-02 20:47:42.357999 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=13.7 ms
2025-06-02 20:47:43.348667 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.56 ms
2025-06-02 20:47:44.351503 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.57 ms
2025-06-02 20:47:44.351644 | orchestrator |
2025-06-02 20:47:44.351662 | orchestrator | --- 192.168.112.147 ping statistics ---
2025-06-02 20:47:44.351689 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-06-02 20:47:44.351701 | orchestrator | rtt min/avg/max/mdev = 2.562/6.276/13.694/5.244 ms
2025-06-02 20:47:44.351749 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:47:44.352965 | orchestrator | + ping -c3 192.168.112.185
2025-06-02 20:47:44.364313 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2025-06-02 20:47:44.364391 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=8.02 ms
2025-06-02 20:47:45.358709 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=3.02 ms
2025-06-02 20:47:46.359470 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.37 ms
2025-06-02 20:47:46.359580 | orchestrator |
2025-06-02 20:47:46.359596 | orchestrator | --- 192.168.112.185 ping statistics ---
2025-06-02 20:47:46.359660 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:47:46.359673 | orchestrator | rtt min/avg/max/mdev = 2.374/4.468/8.017/2.522 ms
2025-06-02 20:47:46.360077 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:47:46.360102 | orchestrator | + ping -c3 192.168.112.192
2025-06-02 20:47:46.371153 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-06-02 20:47:46.371231 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.44 ms
2025-06-02 20:47:47.369341 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.64 ms
2025-06-02 20:47:48.370591 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.77 ms
2025-06-02 20:47:48.370754 | orchestrator |
2025-06-02 20:47:48.370772 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-06-02 20:47:48.370786 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-06-02 20:47:48.370797 | orchestrator | rtt min/avg/max/mdev = 1.768/3.618/6.444/2.029 ms
2025-06-02 20:47:48.371268 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:47:48.371295 | orchestrator | + ping -c3 192.168.112.169
2025-06-02 20:47:48.381069 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-06-02 20:47:48.381128 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=5.68 ms
2025-06-02 20:47:49.379020 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.31 ms
2025-06-02 20:47:50.380475 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.68 ms
2025-06-02 20:47:50.380577 | orchestrator |
2025-06-02 20:47:50.380666 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-06-02 20:47:50.380679 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:47:50.380689 | orchestrator | rtt min/avg/max/mdev = 1.679/3.219/5.675/1.754 ms
2025-06-02 20:47:50.380779 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:47:50.380794 | orchestrator | + ping -c3 192.168.112.133
2025-06-02 20:47:50.394280 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2025-06-02 20:47:50.394389 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=8.02 ms
2025-06-02 20:47:51.388985 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.12 ms
2025-06-02 20:47:52.390588 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.75 ms
2025-06-02 20:47:52.390744 | orchestrator |
2025-06-02 20:47:52.390761 | orchestrator | --- 192.168.112.133 ping statistics ---
2025-06-02 20:47:52.390774 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 20:47:52.390786 | orchestrator | rtt min/avg/max/mdev = 1.753/3.964/8.022/2.872 ms
2025-06-02 20:47:52.806600 | orchestrator | ok: Runtime: 0:18:47.145016
2025-06-02 20:47:52.858867 |
2025-06-02 20:47:52.859053 | TASK [Run tempest]
2025-06-02 20:47:53.397466 | orchestrator | skipping: Conditional result was False
2025-06-02 20:47:53.416311 |
2025-06-02 20:47:53.416489 | TASK [Check prometheus alert status]
2025-06-02 20:47:53.955265 | orchestrator | skipping: Conditional result was False
2025-06-02 20:47:53.957139 |
2025-06-02 20:47:53.957286 | PLAY RECAP
2025-06-02 20:47:53.957381 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-06-02 20:47:53.957421 |
2025-06-02 20:47:54.202263 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-02 20:47:54.204929 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 20:47:54.924239 |
2025-06-02 20:47:54.924488 | PLAY [Post output play]
2025-06-02 20:47:54.940975 |
2025-06-02 20:47:54.941128 | LOOP [stage-output : Register sources]
2025-06-02 20:47:55.006018 |
2025-06-02 20:47:55.006288 | TASK [stage-output : Check sudo]
2025-06-02 20:47:55.850146 | orchestrator | sudo: a password is required
2025-06-02 20:47:56.046882 | orchestrator | ok: Runtime: 0:00:00.009970
2025-06-02 20:47:56.060132 |
2025-06-02 20:47:56.060295 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-02 20:47:56.109433 |
2025-06-02 20:47:56.109769 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-02 20:47:56.188348 | orchestrator | ok
2025-06-02 20:47:56.197002 |
2025-06-02 20:47:56.197134 | LOOP [stage-output : Ensure target folders exist]
2025-06-02 20:47:56.640761 | orchestrator | ok: "docs"
2025-06-02 20:47:56.641054 |
2025-06-02 20:47:56.889766 | orchestrator | ok: "artifacts"
2025-06-02 20:47:57.145185 | orchestrator | ok: "logs"
2025-06-02 20:47:57.167538 |
2025-06-02 20:47:57.167763 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-02 20:47:57.205274 |
2025-06-02 20:47:57.205548 | TASK [stage-output : Make all log files readable]
2025-06-02 20:47:57.508962 | orchestrator | ok
2025-06-02 20:47:57.521494 |
2025-06-02 20:47:57.521802 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-02 20:47:57.559383 | orchestrator | skipping: Conditional result was False
2025-06-02 20:47:57.578013 |
2025-06-02 20:47:57.578181 | TASK [stage-output : Discover log files for compression]
2025-06-02 20:47:57.602969 | orchestrator | skipping: Conditional result was False
2025-06-02 20:47:57.618518 |
2025-06-02 20:47:57.618704 | LOOP [stage-output : Archive everything from logs]
2025-06-02 20:47:57.665535 |
2025-06-02 20:47:57.665778 | PLAY [Post cleanup play]
2025-06-02 20:47:57.674686 |
2025-06-02 20:47:57.674808 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 20:47:57.722041 | orchestrator | ok
2025-06-02 20:47:57.733565 |
2025-06-02 20:47:57.733735 | TASK [Set cloud fact (local deployment)]
2025-06-02 20:47:57.757971 | orchestrator | skipping: Conditional result was False
2025-06-02 20:47:57.777270 |
2025-06-02 20:47:57.777455 | TASK [Clean the cloud environment]
2025-06-02 20:47:58.555186 | orchestrator | 2025-06-02 20:47:58 - clean up servers
2025-06-02 20:47:59.268947 | orchestrator | 2025-06-02 20:47:59 - testbed-manager
2025-06-02 20:47:59.359770 | orchestrator | 2025-06-02 20:47:59 - testbed-node-3
2025-06-02 20:47:59.444858 | orchestrator | 2025-06-02 20:47:59 - testbed-node-4
2025-06-02 20:47:59.540338 | orchestrator | 2025-06-02 20:47:59 - testbed-node-0
2025-06-02 20:47:59.636314 | orchestrator | 2025-06-02 20:47:59 - testbed-node-2
2025-06-02 20:47:59.730536 | orchestrator | 2025-06-02 20:47:59 - testbed-node-1
2025-06-02 20:47:59.835909 | orchestrator | 2025-06-02 20:47:59 - testbed-node-5
2025-06-02 20:47:59.920504 | orchestrator | 2025-06-02 20:47:59 - clean up keypairs
2025-06-02 20:47:59.937182 | orchestrator | 2025-06-02 20:47:59 - testbed
2025-06-02 20:47:59.963346 | orchestrator | 2025-06-02 20:47:59 - wait for servers to be gone
2025-06-02 20:48:10.764268 | orchestrator | 2025-06-02 20:48:10 - clean up ports
2025-06-02 20:48:10.949426 | orchestrator | 2025-06-02 20:48:10 - 31095d61-b4c1-4659-b5df-164622dbc481
2025-06-02 20:48:11.228724 | orchestrator | 2025-06-02 20:48:11 - 559fa97b-36fe-4453-8413-e5557202667e
2025-06-02 20:48:11.477524 | orchestrator | 2025-06-02 20:48:11 - a87cf084-c445-4a5a-9c5c-3ef3b1bc1d7d
2025-06-02 20:48:12.192459 | orchestrator | 2025-06-02 20:48:12 - c5ba1c70-6b76-46fd-8f89-a032351c9724
2025-06-02 20:48:12.400831 | orchestrator | 2025-06-02 20:48:12 - cff32b77-8cc8-44c9-a519-2bd0dbad8ad1
2025-06-02 20:48:12.835792 | orchestrator | 2025-06-02 20:48:12 - e1ee4ce7-1ca5-40f9-bd86-dd0b8876a1ef
2025-06-02 20:48:13.041447 | orchestrator | 2025-06-02 20:48:13 - fe3ec940-dd48-42fd-a6d7-51490e43ea18
2025-06-02 20:48:13.269174 | orchestrator | 2025-06-02 20:48:13 - clean up volumes
2025-06-02 20:48:13.379463 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-manager-base
2025-06-02 20:48:13.419231 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-0-node-base
2025-06-02 20:48:13.464429 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-2-node-base
2025-06-02 20:48:13.512421 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-1-node-base
2025-06-02 20:48:13.555531 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-4-node-base
2025-06-02 20:48:13.597556 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-3-node-base
2025-06-02 20:48:13.638975 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-5-node-base
2025-06-02 20:48:13.676772 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-0-node-3
2025-06-02 20:48:13.717819 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-3-node-3
2025-06-02 20:48:13.759031 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-4-node-4
2025-06-02 20:48:13.811332 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-8-node-5
2025-06-02 20:48:13.854910 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-7-node-4
2025-06-02 20:48:13.901685 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-1-node-4
2025-06-02 20:48:13.948089 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-5-node-5
2025-06-02 20:48:13.989606 | orchestrator | 2025-06-02 20:48:13 - testbed-volume-2-node-5
2025-06-02 20:48:14.032150 | orchestrator | 2025-06-02 20:48:14 - testbed-volume-6-node-3
2025-06-02 20:48:14.080237 | orchestrator | 2025-06-02 20:48:14 - disconnect routers
2025-06-02 20:48:14.624006 | orchestrator | 2025-06-02 20:48:14 - testbed
2025-06-02 20:48:15.647522 | orchestrator | 2025-06-02 20:48:15 - clean up subnets
2025-06-02 20:48:15.716335 | orchestrator | 2025-06-02 20:48:15 - subnet-testbed-management
2025-06-02 20:48:15.873462 | orchestrator | 2025-06-02 20:48:15 - clean up networks
2025-06-02 20:48:16.035334 | orchestrator | 2025-06-02 20:48:16 - net-testbed-management
2025-06-02 20:48:16.345857 | orchestrator | 2025-06-02 20:48:16 - clean up security groups
2025-06-02 20:48:16.388917 | orchestrator | 2025-06-02 20:48:16 - testbed-node
2025-06-02 20:48:16.527591 | orchestrator | 2025-06-02 20:48:16 - testbed-management
2025-06-02 20:48:16.658702 | orchestrator | 2025-06-02 20:48:16 - clean up floating ips
2025-06-02 20:48:16.698114 | orchestrator | 2025-06-02 20:48:16 - 81.163.193.18
2025-06-02 20:48:17.035757 | orchestrator | 2025-06-02 20:48:17 - clean up routers
2025-06-02 20:48:17.163878 | orchestrator | 2025-06-02 20:48:17 - testbed
2025-06-02 20:48:18.336865 | orchestrator | ok: Runtime: 0:00:19.951146
2025-06-02 20:48:18.341492 |
2025-06-02 20:48:18.341700 | PLAY RECAP
2025-06-02 20:48:18.341830 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-02 20:48:18.341893 |
2025-06-02 20:48:18.492238 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 20:48:18.494585 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 20:48:19.237243 |
2025-06-02 20:48:19.237410 | PLAY [Cleanup play]
2025-06-02 20:48:19.253470 |
2025-06-02 20:48:19.253598 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 20:48:19.312406 | orchestrator | ok
2025-06-02 20:48:19.322972 |
2025-06-02 20:48:19.323144 | TASK [Set cloud fact (local deployment)]
2025-06-02 20:48:19.348421 | orchestrator | skipping: Conditional result was False
2025-06-02 20:48:19.361839 |
2025-06-02 20:48:19.361969 | TASK [Clean the cloud environment]
2025-06-02 20:48:20.505128 | orchestrator | 2025-06-02 20:48:20 - clean up servers
2025-06-02 20:48:20.975754 | orchestrator | 2025-06-02 20:48:20 - clean up keypairs
2025-06-02 20:48:20.994711 | orchestrator | 2025-06-02 20:48:20 - wait for servers to be gone
2025-06-02 20:48:21.036177 | orchestrator | 2025-06-02 20:48:21 - clean up ports
2025-06-02 20:48:21.114430 | orchestrator | 2025-06-02 20:48:21 - clean up volumes
2025-06-02 20:48:21.185090 | orchestrator | 2025-06-02 20:48:21 - disconnect routers
2025-06-02 20:48:21.214905 | orchestrator | 2025-06-02 20:48:21 - clean up subnets
2025-06-02 20:48:21.237517 | orchestrator | 2025-06-02 20:48:21 - clean up networks
2025-06-02 20:48:21.369220 | orchestrator | 2025-06-02 20:48:21 - clean up security groups
2025-06-02 20:48:21.406495 | orchestrator | 2025-06-02 20:48:21 - clean up floating ips
2025-06-02 20:48:21.434582 | orchestrator | 2025-06-02 20:48:21 - clean up routers
2025-06-02 20:48:21.625679 | orchestrator | ok: Runtime: 0:00:01.311185
2025-06-02 20:48:21.629802 |
2025-06-02 20:48:21.629956 | PLAY RECAP
2025-06-02 20:48:21.630091 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-02 20:48:21.630145 |
2025-06-02 20:48:21.786590 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 20:48:21.790712 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 20:48:22.542944 |
2025-06-02 20:48:22.543116 | PLAY [Base post-fetch]
2025-06-02 20:48:22.558826 |
2025-06-02 20:48:22.558995 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-02 20:48:22.615389 | orchestrator | skipping: Conditional result was False
2025-06-02 20:48:22.627851 |
2025-06-02 20:48:22.628064 | TASK [fetch-output : Set log path for single node]
2025-06-02 20:48:22.679064 | orchestrator | ok
2025-06-02 20:48:22.689700 |
2025-06-02 20:48:22.689861 | LOOP [fetch-output : Ensure local output dirs]
2025-06-02 20:48:23.165858 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/967e2dde244849e8aeeebb16e5f5ee2e/work/logs"
2025-06-02 20:48:23.441062 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/967e2dde244849e8aeeebb16e5f5ee2e/work/artifacts"
2025-06-02 20:48:23.718501 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/967e2dde244849e8aeeebb16e5f5ee2e/work/docs"
2025-06-02 20:48:23.744105 |
2025-06-02 20:48:23.744228 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-02 20:48:24.668280 | orchestrator | changed: .d..t...... ./
2025-06-02 20:48:24.668783 | orchestrator | changed: All items complete
2025-06-02 20:48:24.668870 |
2025-06-02 20:48:25.398191 | orchestrator | changed: .d..t...... ./
2025-06-02 20:48:26.133356 | orchestrator | changed: .d..t...... ./
2025-06-02 20:48:26.166755 |
2025-06-02 20:48:26.166999 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-02 20:48:26.202884 | orchestrator | skipping: Conditional result was False
2025-06-02 20:48:26.205248 | orchestrator | skipping: Conditional result was False
2025-06-02 20:48:26.223280 |
2025-06-02 20:48:26.223421 | PLAY RECAP
2025-06-02 20:48:26.223518 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-02 20:48:26.223567 |
2025-06-02 20:48:26.356977 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 20:48:26.359375 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 20:48:27.153571 |
2025-06-02 20:48:27.153783 | PLAY [Base post]
2025-06-02 20:48:27.168286 |
2025-06-02 20:48:27.168427 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-02 20:48:28.144404 | orchestrator | changed
2025-06-02 20:48:28.151929 |
2025-06-02 20:48:28.152055 | PLAY RECAP
2025-06-02 20:48:28.152122 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-02 20:48:28.152183 |
2025-06-02 20:48:28.278563 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 20:48:28.280947 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-02 20:48:29.090955 |
2025-06-02 20:48:29.091143 | PLAY [Base post-logs]
2025-06-02 20:48:29.102079 |
2025-06-02 20:48:29.102221 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-02 20:48:29.569835 | localhost | changed
2025-06-02 20:48:29.591385 |
2025-06-02 20:48:29.591580 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-02 20:48:29.630316 | localhost | ok
2025-06-02 20:48:29.637207 |
2025-06-02 20:48:29.637378 | TASK [Set zuul-log-path fact]
2025-06-02 20:48:29.656399 | localhost | ok
2025-06-02 20:48:29.669406 |
2025-06-02 20:48:29.669534 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 20:48:29.698707 | localhost | ok
2025-06-02 20:48:29.706313 |
2025-06-02 20:48:29.706505 | TASK [upload-logs : Create log directories]
2025-06-02 20:48:30.293001 | localhost | changed
2025-06-02 20:48:30.295786 |
2025-06-02 20:48:30.295892 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-02 20:48:30.807994 | localhost -> localhost | ok: Runtime: 0:00:00.005117
2025-06-02 20:48:30.817174 |
2025-06-02 20:48:30.817394 | TASK [upload-logs : Upload logs to log server]
2025-06-02 20:48:31.397464 | localhost | Output suppressed because no_log was given
2025-06-02 20:48:31.401820 |
2025-06-02 20:48:31.402039 | LOOP [upload-logs : Compress console log and json output]
2025-06-02 20:48:31.464330 | localhost | skipping: Conditional result was False
2025-06-02 20:48:31.472124 | localhost | skipping: Conditional result was False
2025-06-02 20:48:31.483064 |
2025-06-02 20:48:31.483229 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-02 20:48:31.536794 | localhost | skipping: Conditional result was False
2025-06-02 20:48:31.537480 |
2025-06-02 20:48:31.540858 | localhost | skipping: Conditional result was False
2025-06-02 20:48:31.555535 |
2025-06-02 20:48:31.555976 | LOOP [upload-logs : Upload console log and json output]
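The "Clean the cloud environment" task above tears the testbed down in dependency order: servers and keypairs first, then the ports and volumes they leave behind, then the network plumbing, and routers last. The resource names below are the ones from the log; the explicit CLI calls are an assumption — the play drives this through its own cleanup logic, not these exact commands — so treat this as a minimal sketch of the same ordering:

```shell
#!/usr/bin/env bash
# Hedged sketch of the logged teardown order:
# servers -> keypairs -> ports -> volumes -> router interfaces ->
# subnets -> networks -> security groups -> floating ips -> routers.

cleanup_testbed() {
    # Servers must go first; --wait mirrors "wait for servers to be gone".
    openstack --os-cloud test server delete --wait \
        testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 \
        testbed-node-3 testbed-node-4 testbed-node-5
    openstack --os-cloud test keypair delete testbed
    # Leftover ports and volumes only become deletable once the servers
    # are gone.
    openstack --os-cloud test port list -f value -c ID |
    while read -r port; do
        openstack --os-cloud test port delete "$port"
    done
    openstack --os-cloud test volume list -f value -c ID |
    while read -r volume; do
        openstack --os-cloud test volume delete "$volume"
    done
    # Detach the subnet before deleting it, then remove the network.
    openstack --os-cloud test router remove subnet testbed subnet-testbed-management
    openstack --os-cloud test subnet delete subnet-testbed-management
    openstack --os-cloud test network delete net-testbed-management
    openstack --os-cloud test security group delete testbed-node testbed-management
    openstack --os-cloud test floating ip list -f value -c ID |
    while read -r fip; do
        openstack --os-cloud test floating ip delete "$fip"
    done
    # The router can only be deleted after its interfaces are gone.
    openstack --os-cloud test router delete testbed
}
```

Running the steps out of order (e.g. deleting the subnet while the router still has an interface on it) is rejected by the API, which is why the play logs "disconnect routers" before "clean up subnets".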